| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
async def download(
self,
*,
history_item_ids: typing.Sequence[str],
output_format: typing.Optional[str] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> typing.AsyncIterator[bytes]:
"""
Download one or more history items. If one history item ID is provided, a single audio file is returned. If more than one history item ID is provided, the history items are packed into a .zip file.
Parameters
----------
history_item_ids : typing.Sequence[str]
A list of history items to download. You can get the IDs of history items and other metadata using the GET https://api.elevenlabs.io/v1/history endpoint.
output_format : typing.Optional[str]
Output format to transcode the audio file to. Can be wav or default.
request_options : typing.Optional[RequestOptions]
Request-specific configuration. You can pass in configuration such as `chunk_size`, and more to customize the request and response.
Returns
-------
typing.AsyncIterator[bytes]
The requested audio file, or a zip file containing multiple audio files when multiple history items are requested.
"""
async with self._raw_client.download(
history_item_ids=history_item_ids, output_format=output_format, request_options=request_options
) as r:
async for _chunk in r.data:
yield _chunk |
| download | python | elevenlabs/elevenlabs-python | src/elevenlabs/history/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/history/client.py | MIT |
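A minimal consumption sketch for the async `download` stream. The `fake_download` generator below is a stand-in for the real client call (which needs an API key and network access), and the byte values are made up; only the `async for` consumption pattern mirrors the method above:

```python
import asyncio
import io
import typing


async def fake_download() -> typing.AsyncIterator[bytes]:
    # Stand-in for `client.history.download(history_item_ids=[...])`;
    # a real call streams audio (or .zip) bytes from the API.
    for chunk in (b"RIFF", b"....", b"WAVE"):
        yield chunk


async def collect(stream: typing.AsyncIterator[bytes]) -> bytes:
    # In practice this would write to open("out.wav", "wb");
    # BytesIO keeps the sketch self-contained.
    buf = io.BytesIO()
    async for chunk in stream:
        buf.write(chunk)
    return buf.getvalue()


data = asyncio.run(collect(fake_download()))
```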
def list(
self,
*,
page_size: typing.Optional[int] = None,
start_after_history_item_id: typing.Optional[str] = None,
voice_id: typing.Optional[str] = None,
search: typing.Optional[str] = None,
source: typing.Optional[HistoryListRequestSource] = None,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[GetSpeechHistoryResponse]:
"""
Returns a list of your generated audio.
Parameters
----------
page_size : typing.Optional[int]
How many history items to return at maximum. Cannot exceed 1000; defaults to 100.
start_after_history_item_id : typing.Optional[str]
The ID after which to start fetching; use this parameter to paginate across a large collection of history items. If this parameter is not provided, history items are fetched starting from the most recently created one, ordered descending by creation date.
voice_id : typing.Optional[str]
ID of the voice to filter for. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
search : typing.Optional[str]
Search term used for filtering history items. If provided, source becomes required.
source : typing.Optional[HistoryListRequestSource]
Source of the generated history item
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[GetSpeechHistoryResponse]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
"v1/history",
base_url=self._client_wrapper.get_environment().base,
method="GET",
params={
"page_size": page_size,
"start_after_history_item_id": start_after_history_item_id,
"voice_id": voice_id,
"search": search,
"source": source,
},
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
GetSpeechHistoryResponse,
construct_type(
type_=GetSpeechHistoryResponse, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
| list | python | elevenlabs/elevenlabs-python | src/elevenlabs/history/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/history/raw_client.py | MIT |
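The `page_size` / `start_after_history_item_id` pair implements plain cursor pagination. A stand-alone sketch of the loop a caller might write — `fetch_page` here simulates the endpoint over a local list and is not part of the SDK:

```python
from typing import List, Optional, Tuple

ITEMS = [f"hist_{i}" for i in range(7)]  # newest-first, as the API returns them


def fetch_page(page_size: int, start_after: Optional[str]) -> Tuple[List[str], bool]:
    # Simulates GET /v1/history: return up to page_size items after the cursor,
    # plus a flag indicating whether more items remain.
    start = 0 if start_after is None else ITEMS.index(start_after) + 1
    page = ITEMS[start : start + page_size]
    return page, start + page_size < len(ITEMS)


def list_all(page_size: int = 3) -> List[str]:
    collected: List[str] = []
    cursor: Optional[str] = None
    has_more = True
    while has_more:
        page, has_more = fetch_page(page_size, cursor)
        collected.extend(page)
        cursor = page[-1] if page else None
    return collected


all_items = list_all()
```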
def get(
self, history_item_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> HttpResponse[SpeechHistoryItemResponse]:
"""
Retrieves a history item.
Parameters
----------
history_item_id : str
ID of the history item to be used. You can use the [Get generated items](/docs/api-reference/history/get-all) endpoint to retrieve a list of history items.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[SpeechHistoryItemResponse]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/history/{jsonable_encoder(history_item_id)}",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
SpeechHistoryItemResponse,
construct_type(
type_=SpeechHistoryItemResponse, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/history/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/history/raw_client.py | MIT |
def delete(
self, history_item_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> HttpResponse[DeleteHistoryItemResponse]:
"""
Delete a history item by its ID
Parameters
----------
history_item_id : str
ID of the history item to be used. You can use the [Get generated items](/docs/api-reference/history/get-all) endpoint to retrieve a list of history items.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[DeleteHistoryItemResponse]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/history/{jsonable_encoder(history_item_id)}",
base_url=self._client_wrapper.get_environment().base,
method="DELETE",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
DeleteHistoryItemResponse,
construct_type(
type_=DeleteHistoryItemResponse, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
| delete | python | elevenlabs/elevenlabs-python | src/elevenlabs/history/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/history/raw_client.py | MIT |
def get_audio(
self, history_item_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> typing.Iterator[HttpResponse[typing.Iterator[bytes]]]:
"""
Returns the audio of a history item.
Parameters
----------
history_item_id : str
ID of the history item to be used. You can use the [Get generated items](/docs/api-reference/history/get-all) endpoint to retrieve a list of history items.
request_options : typing.Optional[RequestOptions]
Request-specific configuration. You can pass in configuration such as `chunk_size`, and more to customize the request and response.
Returns
-------
typing.Iterator[HttpResponse[typing.Iterator[bytes]]]
The audio file of the history item.
"""
with self._client_wrapper.httpx_client.stream(
f"v1/history/{jsonable_encoder(history_item_id)}/audio",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
) as _response:
def _stream() -> HttpResponse[typing.Iterator[bytes]]:
try:
if 200 <= _response.status_code < 300:
_chunk_size = request_options.get("chunk_size", 1024) if request_options is not None else 1024
return HttpResponse(
response=_response, data=(_chunk for _chunk in _response.iter_bytes(chunk_size=_chunk_size))
)
_response.read()
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(
status_code=_response.status_code, headers=dict(_response.headers), body=_response.text
)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json)
yield _stream() |
| get_audio | python | elevenlabs/elevenlabs-python | src/elevenlabs/history/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/history/raw_client.py | MIT |
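The chunk-size resolution inside `_stream` is a guarded dict lookup with a 1024-byte default. Factored out for clarity — this helper is illustrative, not part of the SDK:

```python
from typing import Mapping, Optional


def resolve_chunk_size(request_options: Optional[Mapping[str, int]], default: int = 1024) -> int:
    # Mirrors the inline expression:
    # request_options.get("chunk_size", 1024) if request_options is not None else 1024
    if request_options is None:
        return default
    return request_options.get("chunk_size", default)


a = resolve_chunk_size(None)                     # no options at all
b = resolve_chunk_size({"chunk_size": 4096})     # caller override
c = resolve_chunk_size({})                       # options without chunk_size
```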
def download(
self,
*,
history_item_ids: typing.Sequence[str],
output_format: typing.Optional[str] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> typing.Iterator[HttpResponse[typing.Iterator[bytes]]]:
"""
Download one or more history items. If one history item ID is provided, a single audio file is returned. If more than one history item ID is provided, the history items are packed into a .zip file.
Parameters
----------
history_item_ids : typing.Sequence[str]
A list of history items to download. You can get the IDs of history items and other metadata using the GET https://api.elevenlabs.io/v1/history endpoint.
output_format : typing.Optional[str]
Output format to transcode the audio file to. Can be wav or default.
request_options : typing.Optional[RequestOptions]
Request-specific configuration. You can pass in configuration such as `chunk_size`, and more to customize the request and response.
Returns
-------
typing.Iterator[HttpResponse[typing.Iterator[bytes]]]
The requested audio file, or a zip file containing multiple audio files when multiple history items are requested.
"""
with self._client_wrapper.httpx_client.stream(
"v1/history/download",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"history_item_ids": history_item_ids,
"output_format": output_format,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
) as _response:
def _stream() -> HttpResponse[typing.Iterator[bytes]]:
try:
if 200 <= _response.status_code < 300:
_chunk_size = request_options.get("chunk_size", 1024) if request_options is not None else 1024
return HttpResponse(
response=_response, data=(_chunk for _chunk in _response.iter_bytes(chunk_size=_chunk_size))
)
_response.read()
if _response.status_code == 400:
raise BadRequestError(
headers=dict(_response.headers),
body=typing.cast(
typing.Optional[typing.Any],
construct_type(
type_=typing.Optional[typing.Any], # type: ignore
object_=_response.json(),
),
),
)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(
status_code=_response.status_code, headers=dict(_response.headers), body=_response.text
)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json)
yield _stream() |
| download | python | elevenlabs/elevenlabs-python | src/elevenlabs/history/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/history/raw_client.py | MIT |
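Because `download` returns a single audio file for one ID but a `.zip` for several, a caller that wants to branch on the payload can sniff the magic bytes: zip archives begin with `PK\x03\x04`. A small sketch, pure Python and independent of the SDK (the sample byte strings are fabricated):

```python
ZIP_MAGIC = b"PK\x03\x04"


def is_zip_payload(first_chunk: bytes) -> bool:
    # Zip archives start with the local-file-header signature PK\x03\x04;
    # a lone MP3/WAV download will not.
    return first_chunk.startswith(ZIP_MAGIC)


single = is_zip_payload(b"ID3\x04rest-of-an-mp3")   # one history item -> audio
packed = is_zip_payload(b"PK\x03\x04rest-of-a-zip")  # several items -> archive
```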
async def list(
self,
*,
page_size: typing.Optional[int] = None,
start_after_history_item_id: typing.Optional[str] = None,
voice_id: typing.Optional[str] = None,
search: typing.Optional[str] = None,
source: typing.Optional[HistoryListRequestSource] = None,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[GetSpeechHistoryResponse]:
"""
Returns a list of your generated audio.
Parameters
----------
page_size : typing.Optional[int]
How many history items to return at maximum. Cannot exceed 1000; defaults to 100.
start_after_history_item_id : typing.Optional[str]
The ID after which to start fetching; use this parameter to paginate across a large collection of history items. If this parameter is not provided, history items are fetched starting from the most recently created one, ordered descending by creation date.
voice_id : typing.Optional[str]
ID of the voice to filter for. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
search : typing.Optional[str]
Search term used for filtering history items. If provided, source becomes required.
source : typing.Optional[HistoryListRequestSource]
Source of the generated history item
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[GetSpeechHistoryResponse]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
"v1/history",
base_url=self._client_wrapper.get_environment().base,
method="GET",
params={
"page_size": page_size,
"start_after_history_item_id": start_after_history_item_id,
"voice_id": voice_id,
"search": search,
"source": source,
},
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
GetSpeechHistoryResponse,
construct_type(
type_=GetSpeechHistoryResponse, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
| list | python | elevenlabs/elevenlabs-python | src/elevenlabs/history/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/history/raw_client.py | MIT |
async def get(
self, history_item_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> AsyncHttpResponse[SpeechHistoryItemResponse]:
"""
Retrieves a history item.
Parameters
----------
history_item_id : str
ID of the history item to be used. You can use the [Get generated items](/docs/api-reference/history/get-all) endpoint to retrieve a list of history items.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[SpeechHistoryItemResponse]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/history/{jsonable_encoder(history_item_id)}",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
SpeechHistoryItemResponse,
construct_type(
type_=SpeechHistoryItemResponse, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/history/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/history/raw_client.py | MIT |
async def delete(
self, history_item_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> AsyncHttpResponse[DeleteHistoryItemResponse]:
"""
Delete a history item by its ID
Parameters
----------
history_item_id : str
ID of the history item to be used. You can use the [Get generated items](/docs/api-reference/history/get-all) endpoint to retrieve a list of history items.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[DeleteHistoryItemResponse]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/history/{jsonable_encoder(history_item_id)}",
base_url=self._client_wrapper.get_environment().base,
method="DELETE",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
DeleteHistoryItemResponse,
construct_type(
type_=DeleteHistoryItemResponse, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
| delete | python | elevenlabs/elevenlabs-python | src/elevenlabs/history/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/history/raw_client.py | MIT |
async def get_audio(
self, history_item_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> typing.AsyncIterator[AsyncHttpResponse[typing.AsyncIterator[bytes]]]:
"""
Returns the audio of a history item.
Parameters
----------
history_item_id : str
ID of the history item to be used. You can use the [Get generated items](/docs/api-reference/history/get-all) endpoint to retrieve a list of history items.
request_options : typing.Optional[RequestOptions]
Request-specific configuration. You can pass in configuration such as `chunk_size`, and more to customize the request and response.
Returns
-------
typing.AsyncIterator[AsyncHttpResponse[typing.AsyncIterator[bytes]]]
The audio file of the history item.
"""
async with self._client_wrapper.httpx_client.stream(
f"v1/history/{jsonable_encoder(history_item_id)}/audio",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
) as _response:
async def _stream() -> AsyncHttpResponse[typing.AsyncIterator[bytes]]:
try:
if 200 <= _response.status_code < 300:
_chunk_size = request_options.get("chunk_size", 1024) if request_options is not None else 1024
return AsyncHttpResponse(
response=_response,
data=(_chunk async for _chunk in _response.aiter_bytes(chunk_size=_chunk_size)),
)
await _response.aread()
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(
status_code=_response.status_code, headers=dict(_response.headers), body=_response.text
)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json)
yield await _stream() |
| get_audio | python | elevenlabs/elevenlabs-python | src/elevenlabs/history/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/history/raw_client.py | MIT |
async def download(
self,
*,
history_item_ids: typing.Sequence[str],
output_format: typing.Optional[str] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> typing.AsyncIterator[AsyncHttpResponse[typing.AsyncIterator[bytes]]]:
"""
Download one or more history items. If one history item ID is provided, a single audio file is returned. If more than one history item ID is provided, the history items are packed into a .zip file.
Parameters
----------
history_item_ids : typing.Sequence[str]
A list of history items to download. You can get the IDs of history items and other metadata using the GET https://api.elevenlabs.io/v1/history endpoint.
output_format : typing.Optional[str]
Output format to transcode the audio file to. Can be wav or default.
request_options : typing.Optional[RequestOptions]
Request-specific configuration. You can pass in configuration such as `chunk_size`, and more to customize the request and response.
Returns
-------
typing.AsyncIterator[AsyncHttpResponse[typing.AsyncIterator[bytes]]]
The requested audio file, or a zip file containing multiple audio files when multiple history items are requested.
"""
async with self._client_wrapper.httpx_client.stream(
"v1/history/download",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"history_item_ids": history_item_ids,
"output_format": output_format,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
) as _response:
async def _stream() -> AsyncHttpResponse[typing.AsyncIterator[bytes]]:
try:
if 200 <= _response.status_code < 300:
_chunk_size = request_options.get("chunk_size", 1024) if request_options is not None else 1024
return AsyncHttpResponse(
response=_response,
data=(_chunk async for _chunk in _response.aiter_bytes(chunk_size=_chunk_size)),
)
await _response.aread()
if _response.status_code == 400:
raise BadRequestError(
headers=dict(_response.headers),
body=typing.cast(
typing.Optional[typing.Any],
construct_type(
type_=typing.Optional[typing.Any], # type: ignore
object_=_response.json(),
),
),
)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(
status_code=_response.status_code, headers=dict(_response.headers), body=_response.text
)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json)
yield await _stream() |
| download | python | elevenlabs/elevenlabs-python | src/elevenlabs/history/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/history/raw_client.py | MIT |
async def list(self, *, request_options: typing.Optional[RequestOptions] = None) -> typing.List[Model]:
"""
Gets a list of available models.
Parameters
----------
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
typing.List[Model]
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.models.list()
asyncio.run(main())
"""
_response = await self._raw_client.list(request_options=request_options)
return _response.data |
| list | python | elevenlabs/elevenlabs-python | src/elevenlabs/models/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/models/client.py | MIT |
def list(self, *, request_options: typing.Optional[RequestOptions] = None) -> HttpResponse[typing.List[Model]]:
"""
Gets a list of available models.
Parameters
----------
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[typing.List[Model]]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
"v1/models",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
typing.List[Model],
construct_type(
type_=typing.List[Model], # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Gets a list of available models.
Parameters
----------
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[typing.List[Model]]
Successful Response
| list | python | elevenlabs/elevenlabs-python | src/elevenlabs/models/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/models/raw_client.py | MIT |
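The raw client's response handling above repeats one pattern: a 2xx response is parsed into the typed payload, a 422 raises a validation error, and anything else becomes a generic API error. A self-contained sketch of that dispatch logic — the exception classes here are local stand-ins, not the SDK's own:

```python
class UnprocessableEntityError(Exception):
    pass

class ApiError(Exception):
    def __init__(self, status_code, body):
        super().__init__(f"status {status_code}: {body}")
        self.status_code = status_code
        self.body = body

def dispatch(status_code, payload):
    """Mirror the raw client's branching on HTTP status."""
    if 200 <= status_code < 300:
        return payload                      # the SDK would run construct_type(...) here
    if status_code == 422:
        raise UnprocessableEntityError(payload)
    raise ApiError(status_code, payload)
```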
async def list(
self, *, request_options: typing.Optional[RequestOptions] = None
) -> AsyncHttpResponse[typing.List[Model]]:
"""
Gets a list of available models.
Parameters
----------
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[typing.List[Model]]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
"v1/models",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
typing.List[Model],
construct_type(
type_=typing.List[Model], # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Gets a list of available models.
Parameters
----------
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[typing.List[Model]]
Successful Response
| list | python | elevenlabs/elevenlabs-python | src/elevenlabs/models/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/models/raw_client.py | MIT |
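Both raw variants return the parsed payload wrapped together with the transport response (`HttpResponse` / `AsyncHttpResponse`), and the high-level client simply unwraps `.data`. A toy illustration of that wrapper pattern, assuming a simplified local `Response` type rather than the SDK's:

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")

@dataclass
class Response(Generic[T]):
    # Local stand-in for HttpResponse[T]: transport details plus parsed payload.
    status_code: int
    data: T

def list_models(raw: "Response[list]") -> list:
    # The high-level client delegates to the raw client and unwraps .data.
    return raw.data

models = list_models(Response(status_code=200, data=["eleven_multilingual_v2"]))
```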
def create_from_file(
self,
*,
name: str,
file: typing.Optional[core.File] = OMIT,
description: typing.Optional[str] = OMIT,
workspace_access: typing.Optional[PronunciationDictionariesCreateFromFileRequestWorkspaceAccess] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AddPronunciationDictionaryResponseModel:
"""
Creates a new pronunciation dictionary from a lexicon .PLS file
Parameters
----------
name : str
The name of the pronunciation dictionary, used for identification only.
file : typing.Optional[core.File]
See core.File for more documentation
description : typing.Optional[str]
A description of the pronunciation dictionary, used for identification only.
workspace_access : typing.Optional[PronunciationDictionariesCreateFromFileRequestWorkspaceAccess]
Should be one of 'admin', 'editor' or 'viewer'. If not provided, defaults to no access.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddPronunciationDictionaryResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.pronunciation_dictionaries.create_from_file(
name="name",
)
"""
_response = self._raw_client.create_from_file(
name=name,
file=file,
description=description,
workspace_access=workspace_access,
request_options=request_options,
)
return _response.data |
Creates a new pronunciation dictionary from a lexicon .PLS file
Parameters
----------
name : str
The name of the pronunciation dictionary, used for identification only.
file : typing.Optional[core.File]
See core.File for more documentation
description : typing.Optional[str]
A description of the pronunciation dictionary, used for identification only.
workspace_access : typing.Optional[PronunciationDictionariesCreateFromFileRequestWorkspaceAccess]
Should be one of 'admin', 'editor' or 'viewer'. If not provided, defaults to no access.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddPronunciationDictionaryResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.pronunciation_dictionaries.create_from_file(
name="name",
)
| create_from_file | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/client.py | MIT |
def create_from_rules(
self,
*,
rules: typing.Sequence[BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostRulesItem],
name: str,
description: typing.Optional[str] = OMIT,
workspace_access: typing.Optional[
BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostWorkspaceAccess
] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AddPronunciationDictionaryResponseModel:
"""
Creates a new pronunciation dictionary from provided rules.
Parameters
----------
rules : typing.Sequence[BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostRulesItem]
List of pronunciation rules. Each rule can be either:
an alias rule: {'string_to_replace': 'a', 'type': 'alias', 'alias': 'b', }
or a phoneme rule: {'string_to_replace': 'a', 'type': 'phoneme', 'phoneme': 'b', 'alphabet': 'ipa' }
name : str
The name of the pronunciation dictionary, used for identification only.
description : typing.Optional[str]
A description of the pronunciation dictionary, used for identification only.
workspace_access : typing.Optional[BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostWorkspaceAccess]
Should be one of 'admin', 'editor' or 'viewer'. If not provided, defaults to no access.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddPronunciationDictionaryResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
from elevenlabs.pronunciation_dictionaries import (
BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostRulesItem_Alias,
)
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.pronunciation_dictionaries.create_from_rules(
rules=[
BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostRulesItem_Alias(
string_to_replace="Thailand",
alias="tie-land",
)
],
name="My Dictionary",
)
"""
_response = self._raw_client.create_from_rules(
rules=rules,
name=name,
description=description,
workspace_access=workspace_access,
request_options=request_options,
)
return _response.data |
Creates a new pronunciation dictionary from provided rules.
Parameters
----------
rules : typing.Sequence[BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostRulesItem]
List of pronunciation rules. Each rule can be either:
an alias rule: {'string_to_replace': 'a', 'type': 'alias', 'alias': 'b', }
or a phoneme rule: {'string_to_replace': 'a', 'type': 'phoneme', 'phoneme': 'b', 'alphabet': 'ipa' }
name : str
The name of the pronunciation dictionary, used for identification only.
description : typing.Optional[str]
A description of the pronunciation dictionary, used for identification only.
workspace_access : typing.Optional[BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostWorkspaceAccess]
Should be one of 'admin', 'editor' or 'viewer'. If not provided, defaults to no access.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddPronunciationDictionaryResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
from elevenlabs.pronunciation_dictionaries import (
BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostRulesItem_Alias,
)
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.pronunciation_dictionaries.create_from_rules(
rules=[
BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostRulesItem_Alias(
string_to_replace="Thailand",
alias="tie-land",
)
],
name="My Dictionary",
)
| create_from_rules | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/client.py | MIT |
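The `rules` parameter above documents two dict shapes: alias rules and phoneme rules. A small sketch that builds both shapes as plain dicts matching the documented keys — the helper functions are illustrative conveniences, not SDK API:

```python
def alias_rule(string_to_replace, alias):
    """Build an alias rule dict in the documented shape."""
    return {"string_to_replace": string_to_replace, "type": "alias", "alias": alias}

def phoneme_rule(string_to_replace, phoneme, alphabet="ipa"):
    """Build a phoneme rule dict in the documented shape."""
    return {
        "string_to_replace": string_to_replace,
        "type": "phoneme",
        "phoneme": phoneme,
        "alphabet": alphabet,
    }

rules = [
    alias_rule("Thailand", "tie-land"),
    phoneme_rule("tomato", "t\u0259\u02c8me\u026ato\u028a"),
]
```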
def download(
self, dictionary_id: str, version_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> typing.Iterator[bytes]:
"""
Get a PLS file with a pronunciation dictionary version's rules.
Parameters
----------
dictionary_id : str
The id of the pronunciation dictionary
version_id : str
The id of the version of the pronunciation dictionary
request_options : typing.Optional[RequestOptions]
Request-specific configuration. You can pass in configuration such as `chunk_size`, and more to customize the request and response.
Returns
-------
typing.Iterator[bytes]
The PLS file containing pronunciation dictionary rules
"""
with self._raw_client.download(dictionary_id, version_id, request_options=request_options) as r:
yield from r.data |
Get a PLS file with a pronunciation dictionary version's rules.
Parameters
----------
dictionary_id : str
The id of the pronunciation dictionary
version_id : str
The id of the version of the pronunciation dictionary
request_options : typing.Optional[RequestOptions]
Request-specific configuration. You can pass in configuration such as `chunk_size`, and more to customize the request and response.
Returns
-------
typing.Iterator[bytes]
The PLS file containing pronunciation dictionary rules
| download | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/client.py | MIT |
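The `download` method streams the version's .PLS (Pronunciation Lexicon Specification) file as byte chunks. A sketch of assembling those chunks and parsing the result as XML, with hard-coded stand-in chunks in place of the real stream:

```python
import xml.etree.ElementTree as ET

def collect_pls(chunks):
    """Assemble streamed byte chunks of a .PLS file into a text document."""
    return b"".join(chunks).decode("utf-8")

# Stand-in for the chunk iterator a download call would yield.
pls_chunks = [b'<lexicon version="1.0">', b"</lexicon>"]
pls_text = collect_pls(pls_chunks)
root = ET.fromstring(pls_text)
```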
def get(
self, pronunciation_dictionary_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> GetPronunciationDictionaryMetadataResponse:
"""
Get metadata for a pronunciation dictionary
Parameters
----------
pronunciation_dictionary_id : str
The id of the pronunciation dictionary
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetPronunciationDictionaryMetadataResponse
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.pronunciation_dictionaries.get(
pronunciation_dictionary_id="21m00Tcm4TlvDq8ikWAM",
)
"""
_response = self._raw_client.get(pronunciation_dictionary_id, request_options=request_options)
return _response.data |
Get metadata for a pronunciation dictionary
Parameters
----------
pronunciation_dictionary_id : str
The id of the pronunciation dictionary
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetPronunciationDictionaryMetadataResponse
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.pronunciation_dictionaries.get(
pronunciation_dictionary_id="21m00Tcm4TlvDq8ikWAM",
)
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/client.py | MIT |
def list(
self,
*,
cursor: typing.Optional[str] = None,
page_size: typing.Optional[int] = None,
sort: typing.Optional[PronunciationDictionariesListRequestSort] = None,
sort_direction: typing.Optional[str] = None,
request_options: typing.Optional[RequestOptions] = None,
) -> GetPronunciationDictionariesMetadataResponseModel:
"""
Get a list of the pronunciation dictionaries you have access to and their metadata
Parameters
----------
cursor : typing.Optional[str]
Used for fetching the next page. The cursor is returned in the response.
page_size : typing.Optional[int]
How many pronunciation dictionaries to return at maximum. Cannot exceed 100; defaults to 30.
sort : typing.Optional[PronunciationDictionariesListRequestSort]
Which field to sort by, one of 'created_at_unix' or 'name'.
sort_direction : typing.Optional[str]
Which direction to sort the pronunciation dictionaries in: 'ascending' or 'descending'.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetPronunciationDictionariesMetadataResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.pronunciation_dictionaries.list()
"""
_response = self._raw_client.list(
cursor=cursor,
page_size=page_size,
sort=sort,
sort_direction=sort_direction,
request_options=request_options,
)
return _response.data |
Get a list of the pronunciation dictionaries you have access to and their metadata
Parameters
----------
cursor : typing.Optional[str]
Used for fetching next page. Cursor is returned in the response.
page_size : typing.Optional[int]
How many pronunciation dictionaries to return at maximum. Cannot exceed 100; defaults to 30.
sort : typing.Optional[PronunciationDictionariesListRequestSort]
Which field to sort by, one of 'created_at_unix' or 'name'.
sort_direction : typing.Optional[str]
Which direction to sort the pronunciation dictionaries in: 'ascending' or 'descending'.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetPronunciationDictionariesMetadataResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.pronunciation_dictionaries.list()
| list | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/client.py | MIT |
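`list` is cursor-paginated: each response carries a cursor to pass back to fetch the next page. A sketch of the usual drain-all-pages loop, with `fetch_page` standing in for the API call (the two-page data set is invented for illustration):

```python
def fetch_page(cursor=None, page_size=2):
    """Stand-in for pronunciation_dictionaries.list(); returns (items, next_cursor)."""
    pages = {"start": (["dict_a", "dict_b"], "c1"), "c1": (["dict_c"], None)}
    return pages[cursor or "start"]

def list_all():
    items, cursor = [], None
    while True:
        page, cursor = fetch_page(cursor=cursor)
        items.extend(page)
        if cursor is None:        # no more pages to fetch
            return items
```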
async def create_from_file(
self,
*,
name: str,
file: typing.Optional[core.File] = OMIT,
description: typing.Optional[str] = OMIT,
workspace_access: typing.Optional[PronunciationDictionariesCreateFromFileRequestWorkspaceAccess] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AddPronunciationDictionaryResponseModel:
"""
Creates a new pronunciation dictionary from a lexicon .PLS file
Parameters
----------
name : str
The name of the pronunciation dictionary, used for identification only.
file : typing.Optional[core.File]
See core.File for more documentation
description : typing.Optional[str]
A description of the pronunciation dictionary, used for identification only.
workspace_access : typing.Optional[PronunciationDictionariesCreateFromFileRequestWorkspaceAccess]
Should be one of 'admin', 'editor' or 'viewer'. If not provided, defaults to no access.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddPronunciationDictionaryResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.pronunciation_dictionaries.create_from_file(
name="name",
)
asyncio.run(main())
"""
_response = await self._raw_client.create_from_file(
name=name,
file=file,
description=description,
workspace_access=workspace_access,
request_options=request_options,
)
return _response.data |
Creates a new pronunciation dictionary from a lexicon .PLS file
Parameters
----------
name : str
The name of the pronunciation dictionary, used for identification only.
file : typing.Optional[core.File]
See core.File for more documentation
description : typing.Optional[str]
A description of the pronunciation dictionary, used for identification only.
workspace_access : typing.Optional[PronunciationDictionariesCreateFromFileRequestWorkspaceAccess]
Should be one of 'admin', 'editor' or 'viewer'. If not provided, defaults to no access.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddPronunciationDictionaryResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.pronunciation_dictionaries.create_from_file(
name="name",
)
asyncio.run(main())
| create_from_file | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/client.py | MIT |
async def create_from_rules(
self,
*,
rules: typing.Sequence[BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostRulesItem],
name: str,
description: typing.Optional[str] = OMIT,
workspace_access: typing.Optional[
BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostWorkspaceAccess
] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AddPronunciationDictionaryResponseModel:
"""
Creates a new pronunciation dictionary from provided rules.
Parameters
----------
rules : typing.Sequence[BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostRulesItem]
List of pronunciation rules. Each rule can be either:
an alias rule: {'string_to_replace': 'a', 'type': 'alias', 'alias': 'b', }
or a phoneme rule: {'string_to_replace': 'a', 'type': 'phoneme', 'phoneme': 'b', 'alphabet': 'ipa' }
name : str
The name of the pronunciation dictionary, used for identification only.
description : typing.Optional[str]
A description of the pronunciation dictionary, used for identification only.
workspace_access : typing.Optional[BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostWorkspaceAccess]
Should be one of 'admin', 'editor' or 'viewer'. If not provided, defaults to no access.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddPronunciationDictionaryResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
from elevenlabs.pronunciation_dictionaries import (
BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostRulesItem_Alias,
)
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.pronunciation_dictionaries.create_from_rules(
rules=[
BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostRulesItem_Alias(
string_to_replace="Thailand",
alias="tie-land",
)
],
name="My Dictionary",
)
asyncio.run(main())
"""
_response = await self._raw_client.create_from_rules(
rules=rules,
name=name,
description=description,
workspace_access=workspace_access,
request_options=request_options,
)
return _response.data |
Creates a new pronunciation dictionary from provided rules.
Parameters
----------
rules : typing.Sequence[BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostRulesItem]
List of pronunciation rules. Each rule can be either:
an alias rule: {'string_to_replace': 'a', 'type': 'alias', 'alias': 'b', }
or a phoneme rule: {'string_to_replace': 'a', 'type': 'phoneme', 'phoneme': 'b', 'alphabet': 'ipa' }
name : str
The name of the pronunciation dictionary, used for identification only.
description : typing.Optional[str]
A description of the pronunciation dictionary, used for identification only.
workspace_access : typing.Optional[BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostWorkspaceAccess]
Should be one of 'admin', 'editor' or 'viewer'. If not provided, defaults to no access.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddPronunciationDictionaryResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
from elevenlabs.pronunciation_dictionaries import (
BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostRulesItem_Alias,
)
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.pronunciation_dictionaries.create_from_rules(
rules=[
BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostRulesItem_Alias(
string_to_replace="Thailand",
alias="tie-land",
)
],
name="My Dictionary",
)
asyncio.run(main())
| create_from_rules | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/client.py | MIT |
async def download(
self, dictionary_id: str, version_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> typing.AsyncIterator[bytes]:
"""
Get a PLS file with a pronunciation dictionary version's rules.
Parameters
----------
dictionary_id : str
The id of the pronunciation dictionary
version_id : str
The id of the version of the pronunciation dictionary
request_options : typing.Optional[RequestOptions]
Request-specific configuration. You can pass in configuration such as `chunk_size`, and more to customize the request and response.
Returns
-------
typing.AsyncIterator[bytes]
The PLS file containing pronunciation dictionary rules
"""
async with self._raw_client.download(dictionary_id, version_id, request_options=request_options) as r:
async for _chunk in r.data:
yield _chunk |
Get a PLS file with a pronunciation dictionary version's rules.
Parameters
----------
dictionary_id : str
The id of the pronunciation dictionary
version_id : str
The id of the version of the pronunciation dictionary
request_options : typing.Optional[RequestOptions]
Request-specific configuration. You can pass in configuration such as `chunk_size`, and more to customize the request and response.
Returns
-------
typing.AsyncIterator[bytes]
The PLS file containing pronunciation dictionary rules
| download | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/client.py | MIT |
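The async download variant yields its chunks through an `async for` loop. A minimal sketch of collecting an async byte iterator into one payload, using a stub async generator in place of the real SDK call:

```python
import asyncio

async def fake_download():
    """Stand-in for the SDK's async byte-chunk iterator."""
    for chunk in (b"<lexicon", b"/>"):
        yield chunk

async def collect(aiter):
    # Drain an async iterator of byte chunks into a single bytes payload.
    out = bytearray()
    async for chunk in aiter:
        out.extend(chunk)
    return bytes(out)

payload = asyncio.run(collect(fake_download()))
```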
async def get(
self, pronunciation_dictionary_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> GetPronunciationDictionaryMetadataResponse:
"""
Get metadata for a pronunciation dictionary
Parameters
----------
pronunciation_dictionary_id : str
The id of the pronunciation dictionary
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetPronunciationDictionaryMetadataResponse
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.pronunciation_dictionaries.get(
pronunciation_dictionary_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
"""
_response = await self._raw_client.get(pronunciation_dictionary_id, request_options=request_options)
return _response.data |
Get metadata for a pronunciation dictionary
Parameters
----------
pronunciation_dictionary_id : str
The id of the pronunciation dictionary
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetPronunciationDictionaryMetadataResponse
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.pronunciation_dictionaries.get(
pronunciation_dictionary_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/client.py | MIT |
async def list(
self,
*,
cursor: typing.Optional[str] = None,
page_size: typing.Optional[int] = None,
sort: typing.Optional[PronunciationDictionariesListRequestSort] = None,
sort_direction: typing.Optional[str] = None,
request_options: typing.Optional[RequestOptions] = None,
) -> GetPronunciationDictionariesMetadataResponseModel:
"""
Get a list of the pronunciation dictionaries you have access to and their metadata
Parameters
----------
cursor : typing.Optional[str]
Used for fetching the next page. The cursor is returned in the response.
page_size : typing.Optional[int]
How many pronunciation dictionaries to return at maximum. Cannot exceed 100; defaults to 30.
sort : typing.Optional[PronunciationDictionariesListRequestSort]
Which field to sort by, one of 'created_at_unix' or 'name'.
sort_direction : typing.Optional[str]
Which direction to sort the pronunciation dictionaries in: 'ascending' or 'descending'.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetPronunciationDictionariesMetadataResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.pronunciation_dictionaries.list()
asyncio.run(main())
"""
_response = await self._raw_client.list(
cursor=cursor,
page_size=page_size,
sort=sort,
sort_direction=sort_direction,
request_options=request_options,
)
return _response.data |
Get a list of the pronunciation dictionaries you have access to and their metadata
Parameters
----------
cursor : typing.Optional[str]
Used for fetching next page. Cursor is returned in the response.
page_size : typing.Optional[int]
How many pronunciation dictionaries to return at maximum. Cannot exceed 100; defaults to 30.
sort : typing.Optional[PronunciationDictionariesListRequestSort]
Which field to sort by, one of 'created_at_unix' or 'name'.
sort_direction : typing.Optional[str]
Which direction to sort the pronunciation dictionaries in: 'ascending' or 'descending'.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetPronunciationDictionariesMetadataResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.pronunciation_dictionaries.list()
asyncio.run(main())
| list | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/client.py | MIT |
def create_from_file(
self,
*,
name: str,
file: typing.Optional[core.File] = OMIT,
description: typing.Optional[str] = OMIT,
workspace_access: typing.Optional[PronunciationDictionariesCreateFromFileRequestWorkspaceAccess] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[AddPronunciationDictionaryResponseModel]:
"""
Creates a new pronunciation dictionary from a lexicon .PLS file
Parameters
----------
name : str
The name of the pronunciation dictionary, used for identification only.
file : typing.Optional[core.File]
See core.File for more documentation
description : typing.Optional[str]
A description of the pronunciation dictionary, used for identification only.
workspace_access : typing.Optional[PronunciationDictionariesCreateFromFileRequestWorkspaceAccess]
Should be one of 'admin', 'editor' or 'viewer'. If not provided, defaults to no access.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[AddPronunciationDictionaryResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
"v1/pronunciation-dictionaries/add-from-file",
base_url=self._client_wrapper.get_environment().base,
method="POST",
data={
"name": name,
"description": description,
"workspace_access": workspace_access,
},
files={
**({"file": file} if file is not None else {}),
},
request_options=request_options,
omit=OMIT,
force_multipart=True,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
AddPronunciationDictionaryResponseModel,
construct_type(
type_=AddPronunciationDictionaryResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Creates a new pronunciation dictionary from a lexicon .PLS file
Parameters
----------
name : str
The name of the pronunciation dictionary, used for identification only.
file : typing.Optional[core.File]
See core.File for more documentation
description : typing.Optional[str]
A description of the pronunciation dictionary, used for identification only.
workspace_access : typing.Optional[PronunciationDictionariesCreateFromFileRequestWorkspaceAccess]
Should be one of 'admin', 'editor' or 'viewer'. If not provided, defaults to no access.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[AddPronunciationDictionaryResponseModel]
Successful Response
| create_from_file | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/raw_client.py | MIT |
def create_from_rules(
self,
*,
rules: typing.Sequence[BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostRulesItem],
name: str,
description: typing.Optional[str] = OMIT,
workspace_access: typing.Optional[
BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostWorkspaceAccess
] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[AddPronunciationDictionaryResponseModel]:
"""
Creates a new pronunciation dictionary from provided rules.
Parameters
----------
rules : typing.Sequence[BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostRulesItem]
List of pronunciation rules. Each rule can be either:
an alias rule: {'string_to_replace': 'a', 'type': 'alias', 'alias': 'b', }
or a phoneme rule: {'string_to_replace': 'a', 'type': 'phoneme', 'phoneme': 'b', 'alphabet': 'ipa' }
name : str
The name of the pronunciation dictionary, used for identification only.
description : typing.Optional[str]
A description of the pronunciation dictionary, used for identification only.
workspace_access : typing.Optional[BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostWorkspaceAccess]
Should be one of 'admin', 'editor' or 'viewer'. If not provided, defaults to no access.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[AddPronunciationDictionaryResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
"v1/pronunciation-dictionaries/add-from-rules",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"rules": convert_and_respect_annotation_metadata(
object_=rules,
annotation=typing.Sequence[
BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostRulesItem
],
direction="write",
),
"name": name,
"description": description,
"workspace_access": workspace_access,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
AddPronunciationDictionaryResponseModel,
construct_type(
type_=AddPronunciationDictionaryResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Creates a new pronunciation dictionary from provided rules.
Parameters
----------
rules : typing.Sequence[BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostRulesItem]
List of pronunciation rules. Each rule can be either:
an alias rule: {'string_to_replace': 'a', 'type': 'alias', 'alias': 'b'}
or a phoneme rule: {'string_to_replace': 'a', 'type': 'phoneme', 'phoneme': 'b', 'alphabet': 'ipa'}
name : str
The name of the pronunciation dictionary, used for identification only.
description : typing.Optional[str]
A description of the pronunciation dictionary, used for identification only.
workspace_access : typing.Optional[BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostWorkspaceAccess]
Should be one of 'admin', 'editor' or 'viewer'. If not provided, defaults to no access.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[AddPronunciationDictionaryResponseModel]
Successful Response
| create_from_rules | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/raw_client.py | MIT |
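The two rule shapes described in the `rules` parameter above are plain dictionaries before the SDK wraps them in typed models. A minimal sketch of validating those shapes locally; the `validate_rule` helper is hypothetical and not part of the elevenlabs SDK:

```python
# Hypothetical helper illustrating the two rule shapes accepted by
# create_from_rules; not part of the elevenlabs SDK itself.
def validate_rule(rule: dict) -> bool:
    """Return True if the dict matches the alias or phoneme rule shape."""
    if "string_to_replace" not in rule:
        return False
    if rule.get("type") == "alias":
        return "alias" in rule
    if rule.get("type") == "phoneme":
        # Phoneme rules additionally need the alphabet, e.g. 'ipa'.
        return "phoneme" in rule and "alphabet" in rule
    return False

rules = [
    {"string_to_replace": "a", "type": "alias", "alias": "b"},
    {"string_to_replace": "a", "type": "phoneme", "phoneme": "b", "alphabet": "ipa"},
]
assert all(validate_rule(r) for r in rules)
```

Such a check can catch malformed rules client-side before the endpoint returns a 422 `UnprocessableEntityError`.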
def download(
self, dictionary_id: str, version_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> typing.Iterator[HttpResponse[typing.Iterator[bytes]]]:
"""
Get a PLS file with the rules of a pronunciation dictionary version
Parameters
----------
dictionary_id : str
The id of the pronunciation dictionary
version_id : str
The id of the version of the pronunciation dictionary
request_options : typing.Optional[RequestOptions]
Request-specific configuration. You can pass in configuration such as `chunk_size`, and more to customize the request and response.
Returns
-------
typing.Iterator[HttpResponse[typing.Iterator[bytes]]]
The PLS file containing pronunciation dictionary rules
"""
with self._client_wrapper.httpx_client.stream(
f"v1/pronunciation-dictionaries/{jsonable_encoder(dictionary_id)}/{jsonable_encoder(version_id)}/download",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
) as _response:
def _stream() -> HttpResponse[typing.Iterator[bytes]]:
try:
if 200 <= _response.status_code < 300:
_chunk_size = request_options.get("chunk_size", 1024) if request_options is not None else 1024
return HttpResponse(
response=_response, data=(_chunk for _chunk in _response.iter_bytes(chunk_size=_chunk_size))
)
_response.read()
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(
status_code=_response.status_code, headers=dict(_response.headers), body=_response.text
)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json)
yield _stream() |
Get a PLS file with the rules of a pronunciation dictionary version
Parameters
----------
dictionary_id : str
The id of the pronunciation dictionary
version_id : str
The id of the version of the pronunciation dictionary
request_options : typing.Optional[RequestOptions]
Request-specific configuration. You can pass in configuration such as `chunk_size`, and more to customize the request and response.
Returns
-------
typing.Iterator[HttpResponse[typing.Iterator[bytes]]]
The PLS file containing pronunciation dictionary rules
| download | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/raw_client.py | MIT |
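The sync `download` above yields raw byte chunks sized by the `chunk_size` request option; a caller typically drains them into a binary sink. A sketch with a stand-in chunk iterator; the `save_stream` helper is illustrative, not part of the SDK:

```python
import io


def save_stream(chunks, sink: io.BufferedIOBase) -> int:
    """Write an iterator of byte chunks to a binary sink; return total bytes written."""
    total = 0
    for chunk in chunks:
        sink.write(chunk)
        total += len(chunk)
    return total


# Simulate a streamed PLS download arriving in two chunks.
fake_chunks = iter([b"<lexicon ", b"version='1.0'/>"])
buf = io.BytesIO()
assert save_stream(fake_chunks, buf) == 24
assert buf.getvalue().startswith(b"<lexicon")
```

In real use the `chunks` argument would be the `data` iterator of the streamed response, and `sink` an open file such as `open("rules.pls", "wb")`.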
def get(
self, pronunciation_dictionary_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> HttpResponse[GetPronunciationDictionaryMetadataResponse]:
"""
Get metadata for a pronunciation dictionary
Parameters
----------
pronunciation_dictionary_id : str
The id of the pronunciation dictionary
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[GetPronunciationDictionaryMetadataResponse]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/pronunciation-dictionaries/{jsonable_encoder(pronunciation_dictionary_id)}/",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
GetPronunciationDictionaryMetadataResponse,
construct_type(
type_=GetPronunciationDictionaryMetadataResponse, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Get metadata for a pronunciation dictionary
Parameters
----------
pronunciation_dictionary_id : str
The id of the pronunciation dictionary
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[GetPronunciationDictionaryMetadataResponse]
Successful Response
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/raw_client.py | MIT |
def list(
self,
*,
cursor: typing.Optional[str] = None,
page_size: typing.Optional[int] = None,
sort: typing.Optional[PronunciationDictionariesListRequestSort] = None,
sort_direction: typing.Optional[str] = None,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[GetPronunciationDictionariesMetadataResponseModel]:
"""
Get a list of the pronunciation dictionaries you have access to and their metadata
Parameters
----------
cursor : typing.Optional[str]
Used for fetching next page. Cursor is returned in the response.
page_size : typing.Optional[int]
How many pronunciation dictionaries to return at maximum. Cannot exceed 100; defaults to 30.
sort : typing.Optional[PronunciationDictionariesListRequestSort]
Which field to sort by, one of 'created_at_unix' or 'name'.
sort_direction : typing.Optional[str]
Which direction to sort the pronunciation dictionaries in. 'ascending' or 'descending'.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[GetPronunciationDictionariesMetadataResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
"v1/pronunciation-dictionaries/",
base_url=self._client_wrapper.get_environment().base,
method="GET",
params={
"cursor": cursor,
"page_size": page_size,
"sort": sort,
"sort_direction": sort_direction,
},
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
GetPronunciationDictionariesMetadataResponseModel,
construct_type(
type_=GetPronunciationDictionariesMetadataResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Get a list of the pronunciation dictionaries you have access to and their metadata
Parameters
----------
cursor : typing.Optional[str]
Used for fetching next page. Cursor is returned in the response.
page_size : typing.Optional[int]
How many pronunciation dictionaries to return at maximum. Cannot exceed 100; defaults to 30.
sort : typing.Optional[PronunciationDictionariesListRequestSort]
Which field to sort by, one of 'created_at_unix' or 'name'.
sort_direction : typing.Optional[str]
Which direction to sort the pronunciation dictionaries in. 'ascending' or 'descending'.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[GetPronunciationDictionariesMetadataResponseModel]
Successful Response
| list | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/raw_client.py | MIT |
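The `cursor`/`page_size` parameters above describe cursor pagination: each response returns the cursor for the next page, and a `None`/empty cursor means the listing is exhausted. A sketch of the drain loop against a stubbed fetch function (the `paginate` helper and `fake_fetch` stub are illustrative, not SDK APIs):

```python
def paginate(fetch_page, page_size: int = 30):
    """Drain a cursor-paginated endpoint.

    fetch_page(cursor, page_size) must return (items, next_cursor),
    mirroring the cursor contract of the list endpoint above.
    """
    cursor = None
    while True:
        items, cursor = fetch_page(cursor, page_size)
        yield from items
        if not cursor:
            break


# Stub standing in for the HTTP call: two pages, then no cursor.
pages = {None: (["dict_a", "dict_b"], "c1"), "c1": (["dict_c"], None)}


def fake_fetch(cursor, page_size):
    return pages[cursor]


assert list(paginate(fake_fetch)) == ["dict_a", "dict_b", "dict_c"]
```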
async def create_from_file(
self,
*,
name: str,
file: typing.Optional[core.File] = OMIT,
description: typing.Optional[str] = OMIT,
workspace_access: typing.Optional[PronunciationDictionariesCreateFromFileRequestWorkspaceAccess] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[AddPronunciationDictionaryResponseModel]:
"""
Creates a new pronunciation dictionary from a lexicon .PLS file
Parameters
----------
name : str
The name of the pronunciation dictionary, used for identification only.
file : typing.Optional[core.File]
See core.File for more documentation
description : typing.Optional[str]
A description of the pronunciation dictionary, used for identification only.
workspace_access : typing.Optional[PronunciationDictionariesCreateFromFileRequestWorkspaceAccess]
Should be one of 'admin', 'editor' or 'viewer'. If not provided, defaults to no access.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[AddPronunciationDictionaryResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
"v1/pronunciation-dictionaries/add-from-file",
base_url=self._client_wrapper.get_environment().base,
method="POST",
data={
"name": name,
"description": description,
"workspace_access": workspace_access,
},
files={
**({"file": file} if file is not None else {}),
},
request_options=request_options,
omit=OMIT,
force_multipart=True,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
AddPronunciationDictionaryResponseModel,
construct_type(
type_=AddPronunciationDictionaryResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Creates a new pronunciation dictionary from a lexicon .PLS file
Parameters
----------
name : str
The name of the pronunciation dictionary, used for identification only.
file : typing.Optional[core.File]
See core.File for more documentation
description : typing.Optional[str]
A description of the pronunciation dictionary, used for identification only.
workspace_access : typing.Optional[PronunciationDictionariesCreateFromFileRequestWorkspaceAccess]
Should be one of 'admin', 'editor' or 'viewer'. If not provided, defaults to no access.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[AddPronunciationDictionaryResponseModel]
Successful Response
| create_from_file | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/raw_client.py | MIT |
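`create_from_file` above sends a multipart request: scalar fields go in `data` and the optional lexicon file in `files`. A rough sketch of assembling that payload from a local `.pls` file; `build_payload` is a hypothetical helper that only mirrors the field names in the method signature, not the SDK's actual request construction:

```python
import os
import tempfile


def build_payload(name, path=None, description=None, workspace_access=None):
    """Assemble (data, files) dicts shaped like the multipart fields above."""
    data = {"name": name, "description": description, "workspace_access": workspace_access}
    files = {}
    if path is not None:
        files["file"] = open(path, "rb")  # caller is responsible for closing
    return data, files


# Write a tiny stand-in lexicon file and build the payload from it.
with tempfile.NamedTemporaryFile("w", suffix=".pls", delete=False) as f:
    f.write("<lexicon version='1.0'/>")
    pls_path = f.name

data, files = build_payload("my-dict", pls_path)
assert data["name"] == "my-dict"
assert files["file"].read().startswith(b"<lexicon")
files["file"].close()
os.unlink(pls_path)
```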
async def create_from_rules(
self,
*,
rules: typing.Sequence[BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostRulesItem],
name: str,
description: typing.Optional[str] = OMIT,
workspace_access: typing.Optional[
BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostWorkspaceAccess
] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[AddPronunciationDictionaryResponseModel]:
"""
Creates a new pronunciation dictionary from provided rules.
Parameters
----------
rules : typing.Sequence[BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostRulesItem]
List of pronunciation rules. Each rule can be either:
an alias rule: {'string_to_replace': 'a', 'type': 'alias', 'alias': 'b'}
or a phoneme rule: {'string_to_replace': 'a', 'type': 'phoneme', 'phoneme': 'b', 'alphabet': 'ipa'}
name : str
The name of the pronunciation dictionary, used for identification only.
description : typing.Optional[str]
A description of the pronunciation dictionary, used for identification only.
workspace_access : typing.Optional[BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostWorkspaceAccess]
Should be one of 'admin', 'editor' or 'viewer'. If not provided, defaults to no access.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[AddPronunciationDictionaryResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
"v1/pronunciation-dictionaries/add-from-rules",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"rules": convert_and_respect_annotation_metadata(
object_=rules,
annotation=typing.Sequence[
BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostRulesItem
],
direction="write",
),
"name": name,
"description": description,
"workspace_access": workspace_access,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
AddPronunciationDictionaryResponseModel,
construct_type(
type_=AddPronunciationDictionaryResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Creates a new pronunciation dictionary from provided rules.
Parameters
----------
rules : typing.Sequence[BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostRulesItem]
List of pronunciation rules. Each rule can be either:
an alias rule: {'string_to_replace': 'a', 'type': 'alias', 'alias': 'b'}
or a phoneme rule: {'string_to_replace': 'a', 'type': 'phoneme', 'phoneme': 'b', 'alphabet': 'ipa'}
name : str
The name of the pronunciation dictionary, used for identification only.
description : typing.Optional[str]
A description of the pronunciation dictionary, used for identification only.
workspace_access : typing.Optional[BodyAddAPronunciationDictionaryV1PronunciationDictionariesAddFromRulesPostWorkspaceAccess]
Should be one of 'admin', 'editor' or 'viewer'. If not provided, defaults to no access.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[AddPronunciationDictionaryResponseModel]
Successful Response
| create_from_rules | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/raw_client.py | MIT |
async def download(
self, dictionary_id: str, version_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> typing.AsyncIterator[AsyncHttpResponse[typing.AsyncIterator[bytes]]]:
"""
Get a PLS file with the rules of a pronunciation dictionary version
Parameters
----------
dictionary_id : str
The id of the pronunciation dictionary
version_id : str
The id of the version of the pronunciation dictionary
request_options : typing.Optional[RequestOptions]
Request-specific configuration. You can pass in configuration such as `chunk_size`, and more to customize the request and response.
Returns
-------
typing.AsyncIterator[AsyncHttpResponse[typing.AsyncIterator[bytes]]]
The PLS file containing pronunciation dictionary rules
"""
async with self._client_wrapper.httpx_client.stream(
f"v1/pronunciation-dictionaries/{jsonable_encoder(dictionary_id)}/{jsonable_encoder(version_id)}/download",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
) as _response:
async def _stream() -> AsyncHttpResponse[typing.AsyncIterator[bytes]]:
try:
if 200 <= _response.status_code < 300:
_chunk_size = request_options.get("chunk_size", 1024) if request_options is not None else 1024
return AsyncHttpResponse(
response=_response,
data=(_chunk async for _chunk in _response.aiter_bytes(chunk_size=_chunk_size)),
)
await _response.aread()
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(
status_code=_response.status_code, headers=dict(_response.headers), body=_response.text
)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json)
yield await _stream() |
Get a PLS file with the rules of a pronunciation dictionary version
Parameters
----------
dictionary_id : str
The id of the pronunciation dictionary
version_id : str
The id of the version of the pronunciation dictionary
request_options : typing.Optional[RequestOptions]
Request-specific configuration. You can pass in configuration such as `chunk_size`, and more to customize the request and response.
Returns
-------
typing.AsyncIterator[AsyncHttpResponse[typing.AsyncIterator[bytes]]]
The PLS file containing pronunciation dictionary rules
| download | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/raw_client.py | MIT |
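The async `download` above yields its chunks through an async iterator (`aiter_bytes` under the hood), so consuming it requires `async for`. A sketch of draining such an iterator, using a fake async generator in place of the response data; the `collect` helper is illustrative, not part of the SDK:

```python
import asyncio


async def collect(aiter) -> bytes:
    """Concatenate an async iterator of byte chunks into one bytes object."""
    out = bytearray()
    async for chunk in aiter:
        out.extend(chunk)
    return bytes(out)


async def fake_download():
    # Stands in for the response's async byte-chunk iterator.
    for chunk in (b"<lexicon ", b"version='1.0'/>"):
        yield chunk


assert asyncio.run(collect(fake_download())) == b"<lexicon version='1.0'/>"
```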
async def get(
self, pronunciation_dictionary_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> AsyncHttpResponse[GetPronunciationDictionaryMetadataResponse]:
"""
Get metadata for a pronunciation dictionary
Parameters
----------
pronunciation_dictionary_id : str
The id of the pronunciation dictionary
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[GetPronunciationDictionaryMetadataResponse]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/pronunciation-dictionaries/{jsonable_encoder(pronunciation_dictionary_id)}/",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
GetPronunciationDictionaryMetadataResponse,
construct_type(
type_=GetPronunciationDictionaryMetadataResponse, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Get metadata for a pronunciation dictionary
Parameters
----------
pronunciation_dictionary_id : str
The id of the pronunciation dictionary
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[GetPronunciationDictionaryMetadataResponse]
Successful Response
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/raw_client.py | MIT |
async def list(
self,
*,
cursor: typing.Optional[str] = None,
page_size: typing.Optional[int] = None,
sort: typing.Optional[PronunciationDictionariesListRequestSort] = None,
sort_direction: typing.Optional[str] = None,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[GetPronunciationDictionariesMetadataResponseModel]:
"""
Get a list of the pronunciation dictionaries you have access to and their metadata
Parameters
----------
cursor : typing.Optional[str]
Used for fetching next page. Cursor is returned in the response.
page_size : typing.Optional[int]
How many pronunciation dictionaries to return at maximum. Cannot exceed 100; defaults to 30.
sort : typing.Optional[PronunciationDictionariesListRequestSort]
Which field to sort by, one of 'created_at_unix' or 'name'.
sort_direction : typing.Optional[str]
Which direction to sort the pronunciation dictionaries in. 'ascending' or 'descending'.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[GetPronunciationDictionariesMetadataResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
"v1/pronunciation-dictionaries/",
base_url=self._client_wrapper.get_environment().base,
method="GET",
params={
"cursor": cursor,
"page_size": page_size,
"sort": sort,
"sort_direction": sort_direction,
},
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
GetPronunciationDictionariesMetadataResponseModel,
construct_type(
type_=GetPronunciationDictionariesMetadataResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Get a list of the pronunciation dictionaries you have access to and their metadata
Parameters
----------
cursor : typing.Optional[str]
Used for fetching next page. Cursor is returned in the response.
page_size : typing.Optional[int]
How many pronunciation dictionaries to return at maximum. Cannot exceed 100; defaults to 30.
sort : typing.Optional[PronunciationDictionariesListRequestSort]
Which field to sort by, one of 'created_at_unix' or 'name'.
sort_direction : typing.Optional[str]
Which direction to sort the pronunciation dictionaries in. 'ascending' or 'descending'.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[GetPronunciationDictionariesMetadataResponseModel]
Successful Response
| list | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/raw_client.py | MIT |
def add(
self,
pronunciation_dictionary_id: str,
*,
rules: typing.Sequence[PronunciationDictionaryRule],
request_options: typing.Optional[RequestOptions] = None,
) -> PronunciationDictionaryRulesResponseModel:
"""
Add rules to the pronunciation dictionary
Parameters
----------
pronunciation_dictionary_id : str
The id of the pronunciation dictionary
rules : typing.Sequence[PronunciationDictionaryRule]
List of pronunciation rules. Each rule can be either:
an alias rule: {'string_to_replace': 'a', 'type': 'alias', 'alias': 'b'}
or a phoneme rule: {'string_to_replace': 'a', 'type': 'phoneme', 'phoneme': 'b', 'alphabet': 'ipa'}
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
PronunciationDictionaryRulesResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
from elevenlabs.pronunciation_dictionaries.rules import (
PronunciationDictionaryRule_Alias,
)
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.pronunciation_dictionaries.rules.add(
pronunciation_dictionary_id="21m00Tcm4TlvDq8ikWAM",
rules=[
PronunciationDictionaryRule_Alias(
string_to_replace="Thailand",
alias="tie-land",
)
],
)
"""
_response = self._raw_client.add(pronunciation_dictionary_id, rules=rules, request_options=request_options)
return _response.data |
Add rules to the pronunciation dictionary
Parameters
----------
pronunciation_dictionary_id : str
The id of the pronunciation dictionary
rules : typing.Sequence[PronunciationDictionaryRule]
List of pronunciation rules. Each rule can be either:
an alias rule: {'string_to_replace': 'a', 'type': 'alias', 'alias': 'b'}
or a phoneme rule: {'string_to_replace': 'a', 'type': 'phoneme', 'phoneme': 'b', 'alphabet': 'ipa'}
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
PronunciationDictionaryRulesResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
from elevenlabs.pronunciation_dictionaries.rules import (
PronunciationDictionaryRule_Alias,
)
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.pronunciation_dictionaries.rules.add(
pronunciation_dictionary_id="21m00Tcm4TlvDq8ikWAM",
rules=[
PronunciationDictionaryRule_Alias(
string_to_replace="Thailand",
alias="tie-land",
)
],
)
| add | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/rules/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/rules/client.py | MIT |
def remove(
self,
pronunciation_dictionary_id: str,
*,
rule_strings: typing.Sequence[str],
request_options: typing.Optional[RequestOptions] = None,
) -> PronunciationDictionaryRulesResponseModel:
"""
Remove rules from the pronunciation dictionary
Parameters
----------
pronunciation_dictionary_id : str
The id of the pronunciation dictionary
rule_strings : typing.Sequence[str]
List of strings to remove from the pronunciation dictionary.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
PronunciationDictionaryRulesResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.pronunciation_dictionaries.rules.remove(
pronunciation_dictionary_id="21m00Tcm4TlvDq8ikWAM",
rule_strings=["rule_strings"],
)
"""
_response = self._raw_client.remove(
pronunciation_dictionary_id, rule_strings=rule_strings, request_options=request_options
)
return _response.data |
Remove rules from the pronunciation dictionary
Parameters
----------
pronunciation_dictionary_id : str
The id of the pronunciation dictionary
rule_strings : typing.Sequence[str]
List of strings to remove from the pronunciation dictionary.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
PronunciationDictionaryRulesResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.pronunciation_dictionaries.rules.remove(
pronunciation_dictionary_id="21m00Tcm4TlvDq8ikWAM",
rule_strings=["rule_strings"],
)
| remove | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/rules/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/rules/client.py | MIT |
async def add(
self,
pronunciation_dictionary_id: str,
*,
rules: typing.Sequence[PronunciationDictionaryRule],
request_options: typing.Optional[RequestOptions] = None,
) -> PronunciationDictionaryRulesResponseModel:
"""
Add rules to the pronunciation dictionary
Parameters
----------
pronunciation_dictionary_id : str
The id of the pronunciation dictionary
rules : typing.Sequence[PronunciationDictionaryRule]
List of pronunciation rules. Each rule can be either:
an alias rule: {'string_to_replace': 'a', 'type': 'alias', 'alias': 'b'}
or a phoneme rule: {'string_to_replace': 'a', 'type': 'phoneme', 'phoneme': 'b', 'alphabet': 'ipa'}
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
PronunciationDictionaryRulesResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
from elevenlabs.pronunciation_dictionaries.rules import (
PronunciationDictionaryRule_Alias,
)
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.pronunciation_dictionaries.rules.add(
pronunciation_dictionary_id="21m00Tcm4TlvDq8ikWAM",
rules=[
PronunciationDictionaryRule_Alias(
string_to_replace="Thailand",
alias="tie-land",
)
],
)
asyncio.run(main())
"""
_response = await self._raw_client.add(
pronunciation_dictionary_id, rules=rules, request_options=request_options
)
return _response.data |
Add rules to the pronunciation dictionary
Parameters
----------
pronunciation_dictionary_id : str
The id of the pronunciation dictionary
rules : typing.Sequence[PronunciationDictionaryRule]
List of pronunciation rules. Rule can be either:
an alias rule: {'string_to_replace': 'a', 'type': 'alias', 'alias': 'b', }
or a phoneme rule: {'string_to_replace': 'a', 'type': 'phoneme', 'phoneme': 'b', 'alphabet': 'ipa' }
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
PronunciationDictionaryRulesResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
from elevenlabs.pronunciation_dictionaries.rules import (
PronunciationDictionaryRule_Alias,
)
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.pronunciation_dictionaries.rules.add(
pronunciation_dictionary_id="21m00Tcm4TlvDq8ikWAM",
rules=[
PronunciationDictionaryRule_Alias(
string_to_replace="Thailand",
alias="tie-land",
)
],
)
asyncio.run(main())
| add | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/rules/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/rules/client.py | MIT |
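The two rule shapes described in the docstring above can be sketched as plain dicts, mirroring the wire format the add-rules endpoint accepts. The helper names (`alias_rule`, `phoneme_rule`) are illustrative only and not part of the SDK; the SDK's typed classes (such as `PronunciationDictionaryRule_Alias`) wrap the same fields.

```python
# Sketch of the documented rule shapes, built as plain dicts.
# These mirror the JSON bodies shown in the docstring above.

def alias_rule(string_to_replace: str, alias: str) -> dict:
    """An alias rule: replace a string with an alternate spelling."""
    return {"string_to_replace": string_to_replace, "type": "alias", "alias": alias}


def phoneme_rule(string_to_replace: str, phoneme: str, alphabet: str = "ipa") -> dict:
    """A phoneme rule: replace a string with an explicit phoneme in the given alphabet."""
    return {
        "string_to_replace": string_to_replace,
        "type": "phoneme",
        "phoneme": phoneme,
        "alphabet": alphabet,
    }


rules = [
    alias_rule("Thailand", "tie-land"),
    phoneme_rule("tomato", "təˈmeɪtoʊ"),
]
```

Either shape, via the corresponding typed class, is what the `rules` argument of `add` ultimately serializes to.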
async def remove(
self,
pronunciation_dictionary_id: str,
*,
rule_strings: typing.Sequence[str],
request_options: typing.Optional[RequestOptions] = None,
) -> PronunciationDictionaryRulesResponseModel:
"""
Remove rules from the pronunciation dictionary
Parameters
----------
pronunciation_dictionary_id : str
The id of the pronunciation dictionary
rule_strings : typing.Sequence[str]
List of strings to remove from the pronunciation dictionary.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
PronunciationDictionaryRulesResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.pronunciation_dictionaries.rules.remove(
pronunciation_dictionary_id="21m00Tcm4TlvDq8ikWAM",
rule_strings=["rule_strings"],
)
asyncio.run(main())
"""
_response = await self._raw_client.remove(
pronunciation_dictionary_id, rule_strings=rule_strings, request_options=request_options
)
return _response.data |
Remove rules from the pronunciation dictionary
Parameters
----------
pronunciation_dictionary_id : str
The id of the pronunciation dictionary
rule_strings : typing.Sequence[str]
List of strings to remove from the pronunciation dictionary.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
PronunciationDictionaryRulesResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.pronunciation_dictionaries.rules.remove(
pronunciation_dictionary_id="21m00Tcm4TlvDq8ikWAM",
rule_strings=["rule_strings"],
)
asyncio.run(main())
| remove | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/rules/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/rules/client.py | MIT |
def add(
self,
pronunciation_dictionary_id: str,
*,
rules: typing.Sequence[PronunciationDictionaryRule],
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[PronunciationDictionaryRulesResponseModel]:
"""
Add rules to the pronunciation dictionary
Parameters
----------
pronunciation_dictionary_id : str
The id of the pronunciation dictionary
rules : typing.Sequence[PronunciationDictionaryRule]
List of pronunciation rules. Rule can be either:
an alias rule: {'string_to_replace': 'a', 'type': 'alias', 'alias': 'b', }
or a phoneme rule: {'string_to_replace': 'a', 'type': 'phoneme', 'phoneme': 'b', 'alphabet': 'ipa' }
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[PronunciationDictionaryRulesResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/pronunciation-dictionaries/{jsonable_encoder(pronunciation_dictionary_id)}/add-rules",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"rules": convert_and_respect_annotation_metadata(
object_=rules, annotation=typing.Sequence[PronunciationDictionaryRule], direction="write"
),
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
PronunciationDictionaryRulesResponseModel,
construct_type(
type_=PronunciationDictionaryRulesResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Add rules to the pronunciation dictionary
Parameters
----------
pronunciation_dictionary_id : str
The id of the pronunciation dictionary
rules : typing.Sequence[PronunciationDictionaryRule]
List of pronunciation rules. Rule can be either:
an alias rule: {'string_to_replace': 'a', 'type': 'alias', 'alias': 'b', }
or a phoneme rule: {'string_to_replace': 'a', 'type': 'phoneme', 'phoneme': 'b', 'alphabet': 'ipa' }
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[PronunciationDictionaryRulesResponseModel]
Successful Response
| add | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/rules/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/rules/raw_client.py | MIT |
def remove(
self,
pronunciation_dictionary_id: str,
*,
rule_strings: typing.Sequence[str],
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[PronunciationDictionaryRulesResponseModel]:
"""
Remove rules from the pronunciation dictionary
Parameters
----------
pronunciation_dictionary_id : str
The id of the pronunciation dictionary
rule_strings : typing.Sequence[str]
List of strings to remove from the pronunciation dictionary.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[PronunciationDictionaryRulesResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/pronunciation-dictionaries/{jsonable_encoder(pronunciation_dictionary_id)}/remove-rules",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"rule_strings": rule_strings,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
PronunciationDictionaryRulesResponseModel,
construct_type(
type_=PronunciationDictionaryRulesResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Remove rules from the pronunciation dictionary
Parameters
----------
pronunciation_dictionary_id : str
The id of the pronunciation dictionary
rule_strings : typing.Sequence[str]
List of strings to remove from the pronunciation dictionary.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[PronunciationDictionaryRulesResponseModel]
Successful Response
| remove | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/rules/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/rules/raw_client.py | MIT |
async def add(
self,
pronunciation_dictionary_id: str,
*,
rules: typing.Sequence[PronunciationDictionaryRule],
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[PronunciationDictionaryRulesResponseModel]:
"""
Add rules to the pronunciation dictionary
Parameters
----------
pronunciation_dictionary_id : str
The id of the pronunciation dictionary
rules : typing.Sequence[PronunciationDictionaryRule]
List of pronunciation rules. Rule can be either:
an alias rule: {'string_to_replace': 'a', 'type': 'alias', 'alias': 'b', }
or a phoneme rule: {'string_to_replace': 'a', 'type': 'phoneme', 'phoneme': 'b', 'alphabet': 'ipa' }
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[PronunciationDictionaryRulesResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/pronunciation-dictionaries/{jsonable_encoder(pronunciation_dictionary_id)}/add-rules",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"rules": convert_and_respect_annotation_metadata(
object_=rules, annotation=typing.Sequence[PronunciationDictionaryRule], direction="write"
),
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
PronunciationDictionaryRulesResponseModel,
construct_type(
type_=PronunciationDictionaryRulesResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Add rules to the pronunciation dictionary
Parameters
----------
pronunciation_dictionary_id : str
The id of the pronunciation dictionary
rules : typing.Sequence[PronunciationDictionaryRule]
List of pronunciation rules. Rule can be either:
an alias rule: {'string_to_replace': 'a', 'type': 'alias', 'alias': 'b', }
or a phoneme rule: {'string_to_replace': 'a', 'type': 'phoneme', 'phoneme': 'b', 'alphabet': 'ipa' }
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[PronunciationDictionaryRulesResponseModel]
Successful Response
| add | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/rules/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/rules/raw_client.py | MIT |
async def remove(
self,
pronunciation_dictionary_id: str,
*,
rule_strings: typing.Sequence[str],
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[PronunciationDictionaryRulesResponseModel]:
"""
Remove rules from the pronunciation dictionary
Parameters
----------
pronunciation_dictionary_id : str
The id of the pronunciation dictionary
rule_strings : typing.Sequence[str]
List of strings to remove from the pronunciation dictionary.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[PronunciationDictionaryRulesResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/pronunciation-dictionaries/{jsonable_encoder(pronunciation_dictionary_id)}/remove-rules",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"rule_strings": rule_strings,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
PronunciationDictionaryRulesResponseModel,
construct_type(
type_=PronunciationDictionaryRulesResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Remove rules from the pronunciation dictionary
Parameters
----------
pronunciation_dictionary_id : str
The id of the pronunciation dictionary
rule_strings : typing.Sequence[str]
List of strings to remove from the pronunciation dictionary.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[PronunciationDictionaryRulesResponseModel]
Successful Response
| remove | python | elevenlabs/elevenlabs-python | src/elevenlabs/pronunciation_dictionaries/rules/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/pronunciation_dictionaries/rules/raw_client.py | MIT |
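Every raw-client method above follows the same response dispatch: a 2xx body is parsed into the typed model, a 422 raises `UnprocessableEntityError` carrying the validation payload, and any other status (or an unparseable body) raises `ApiError`. A minimal, SDK-free sketch of that pattern — the class names are reused for clarity, but this is a simplified stand-in, not the SDK's implementation:

```python
import json


class ApiError(Exception):
    """Generic error carrying the status code and (parsed or raw) body."""

    def __init__(self, status_code: int, body):
        super().__init__(f"{status_code}: {body}")
        self.status_code = status_code
        self.body = body


class UnprocessableEntityError(ApiError):
    """Raised for 422 responses, with the validation payload as the body."""


def dispatch(status_code: int, body_text: str):
    """Mirror the handlers above: 2xx -> parsed data, 422 -> validation
    error, anything else -> ApiError (raw text if the body isn't JSON)."""
    try:
        payload = json.loads(body_text)
    except json.JSONDecodeError:
        raise ApiError(status_code, body_text)
    if 200 <= status_code < 300:
        return payload
    if status_code == 422:
        raise UnprocessableEntityError(status_code, payload)
    raise ApiError(status_code, payload)
```

Since `UnprocessableEntityError` subclasses `ApiError`, callers can catch the specific validation case first and fall back to the generic error.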
def delete(
self, voice_id: str, sample_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> DeleteSampleResponse:
"""
Removes a sample by its ID.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
sample_id : str
ID of the sample to be used. You can use the [Get voices](/docs/api-reference/voices/get) endpoint to list all the available samples for a voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
DeleteSampleResponse
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.samples.delete(
voice_id="21m00Tcm4TlvDq8ikWAM",
sample_id="VW7YKqPnjY4h39yTbx2L",
)
"""
_response = self._raw_client.delete(voice_id, sample_id, request_options=request_options)
return _response.data |
Removes a sample by its ID.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
sample_id : str
ID of the sample to be used. You can use the [Get voices](/docs/api-reference/voices/get) endpoint to list all the available samples for a voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
DeleteSampleResponse
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.samples.delete(
voice_id="21m00Tcm4TlvDq8ikWAM",
sample_id="VW7YKqPnjY4h39yTbx2L",
)
| delete | python | elevenlabs/elevenlabs-python | src/elevenlabs/samples/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/samples/client.py | MIT |
async def delete(
self, voice_id: str, sample_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> DeleteSampleResponse:
"""
Removes a sample by its ID.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
sample_id : str
ID of the sample to be used. You can use the [Get voices](/docs/api-reference/voices/get) endpoint to list all the available samples for a voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
DeleteSampleResponse
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.samples.delete(
voice_id="21m00Tcm4TlvDq8ikWAM",
sample_id="VW7YKqPnjY4h39yTbx2L",
)
asyncio.run(main())
"""
_response = await self._raw_client.delete(voice_id, sample_id, request_options=request_options)
return _response.data |
Removes a sample by its ID.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
sample_id : str
ID of the sample to be used. You can use the [Get voices](/docs/api-reference/voices/get) endpoint to list all the available samples for a voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
DeleteSampleResponse
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.samples.delete(
voice_id="21m00Tcm4TlvDq8ikWAM",
sample_id="VW7YKqPnjY4h39yTbx2L",
)
asyncio.run(main())
| delete | python | elevenlabs/elevenlabs-python | src/elevenlabs/samples/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/samples/client.py | MIT |
def delete(
self, voice_id: str, sample_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> HttpResponse[DeleteSampleResponse]:
"""
Removes a sample by its ID.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
sample_id : str
ID of the sample to be used. You can use the [Get voices](/docs/api-reference/voices/get) endpoint to list all the available samples for a voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[DeleteSampleResponse]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/voices/{jsonable_encoder(voice_id)}/samples/{jsonable_encoder(sample_id)}",
base_url=self._client_wrapper.get_environment().base,
method="DELETE",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
DeleteSampleResponse,
construct_type(
type_=DeleteSampleResponse, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Removes a sample by its ID.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
sample_id : str
ID of the sample to be used. You can use the [Get voices](/docs/api-reference/voices/get) endpoint to list all the available samples for a voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[DeleteSampleResponse]
Successful Response
| delete | python | elevenlabs/elevenlabs-python | src/elevenlabs/samples/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/samples/raw_client.py | MIT |
async def delete(
self, voice_id: str, sample_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> AsyncHttpResponse[DeleteSampleResponse]:
"""
Removes a sample by its ID.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
sample_id : str
ID of the sample to be used. You can use the [Get voices](/docs/api-reference/voices/get) endpoint to list all the available samples for a voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[DeleteSampleResponse]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/voices/{jsonable_encoder(voice_id)}/samples/{jsonable_encoder(sample_id)}",
base_url=self._client_wrapper.get_environment().base,
method="DELETE",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
DeleteSampleResponse,
construct_type(
type_=DeleteSampleResponse, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Removes a sample by its ID.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
sample_id : str
ID of the sample to be used. You can use the [Get voices](/docs/api-reference/voices/get) endpoint to list all the available samples for a voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[DeleteSampleResponse]
Successful Response
| delete | python | elevenlabs/elevenlabs-python | src/elevenlabs/samples/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/samples/raw_client.py | MIT |
def convert(
self,
*,
model_id: str,
enable_logging: typing.Optional[bool] = None,
file: typing.Optional[core.File] = OMIT,
language_code: typing.Optional[str] = OMIT,
tag_audio_events: typing.Optional[bool] = OMIT,
num_speakers: typing.Optional[int] = OMIT,
timestamps_granularity: typing.Optional[SpeechToTextConvertRequestTimestampsGranularity] = OMIT,
diarize: typing.Optional[bool] = OMIT,
additional_formats: typing.Optional[AdditionalFormats] = OMIT,
file_format: typing.Optional[SpeechToTextConvertRequestFileFormat] = OMIT,
cloud_storage_url: typing.Optional[str] = OMIT,
webhook: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> SpeechToTextChunkResponseModel:
"""
Transcribe an audio or video file. If webhook is set to true, the request will be processed asynchronously and results sent to configured webhooks.
Parameters
----------
model_id : str
The ID of the model to use for transcription, currently only 'scribe_v1' and 'scribe_v1_experimental' are available.
enable_logging : typing.Optional[bool]
When enable_logging is set to false, zero retention mode will be used for the request. This means history features are unavailable for this request, including request stitching. Zero retention mode may only be used by enterprise customers.
file : typing.Optional[core.File]
See core.File for more documentation
language_code : typing.Optional[str]
An ISO-639-1 or ISO-639-3 language_code corresponding to the language of the audio file. Can sometimes improve transcription performance if known beforehand. Defaults to null; in this case the language is predicted automatically.
tag_audio_events : typing.Optional[bool]
Whether to tag audio events like (laughter), (footsteps), etc. in the transcription.
num_speakers : typing.Optional[int]
The maximum number of speakers talking in the uploaded file. Can help with predicting who speaks when. The maximum number of speakers that can be predicted is 32. Defaults to null; in this case the number of speakers is set to the maximum value the model supports.
timestamps_granularity : typing.Optional[SpeechToTextConvertRequestTimestampsGranularity]
The granularity of the timestamps in the transcription. 'word' provides word-level timestamps and 'character' provides character-level timestamps per word.
diarize : typing.Optional[bool]
Whether to annotate which speaker is currently talking in the uploaded file.
additional_formats : typing.Optional[AdditionalFormats]
A list of additional formats to export the transcript to.
file_format : typing.Optional[SpeechToTextConvertRequestFileFormat]
The format of input audio. Options are 'pcm_s16le_16' or 'other'. For `pcm_s16le_16`, the input audio must be 16-bit PCM at a 16kHz sample rate, single channel (mono), and little-endian byte order. Latency will be lower than with passing an encoded waveform.
cloud_storage_url : typing.Optional[str]
The valid AWS S3, Cloudflare R2 or Google Cloud Storage URL of the file to transcribe. Exactly one of the file or cloud_storage_url parameters must be provided. The file must be a valid publicly accessible cloud storage URL. The file size must be less than 2GB. URL can be pre-signed.
webhook : typing.Optional[bool]
Whether to send the transcription result to configured speech-to-text webhooks. If set, the request will return early without the transcription, which will be delivered later via webhook. Webhooks can be created and assigned to a transcription task in the webhook settings page in the UI.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
SpeechToTextChunkResponseModel
Synchronous transcription result
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.speech_to_text.convert(
model_id="model_id",
)
"""
_response = self._raw_client.convert(
model_id=model_id,
enable_logging=enable_logging,
file=file,
language_code=language_code,
tag_audio_events=tag_audio_events,
num_speakers=num_speakers,
timestamps_granularity=timestamps_granularity,
diarize=diarize,
additional_formats=additional_formats,
file_format=file_format,
cloud_storage_url=cloud_storage_url,
webhook=webhook,
request_options=request_options,
)
return _response.data |
Transcribe an audio or video file. If webhook is set to true, the request will be processed asynchronously and results sent to configured webhooks.
Parameters
----------
model_id : str
The ID of the model to use for transcription, currently only 'scribe_v1' and 'scribe_v1_experimental' are available.
enable_logging : typing.Optional[bool]
When enable_logging is set to false, zero retention mode will be used for the request. This means history features are unavailable for this request, including request stitching. Zero retention mode may only be used by enterprise customers.
file : typing.Optional[core.File]
See core.File for more documentation
language_code : typing.Optional[str]
An ISO-639-1 or ISO-639-3 language_code corresponding to the language of the audio file. Can sometimes improve transcription performance if known beforehand. Defaults to null; in this case the language is predicted automatically.
tag_audio_events : typing.Optional[bool]
Whether to tag audio events like (laughter), (footsteps), etc. in the transcription.
num_speakers : typing.Optional[int]
The maximum number of speakers talking in the uploaded file. Can help with predicting who speaks when. The maximum number of speakers that can be predicted is 32. Defaults to null; in this case the number of speakers is set to the maximum value the model supports.
timestamps_granularity : typing.Optional[SpeechToTextConvertRequestTimestampsGranularity]
The granularity of the timestamps in the transcription. 'word' provides word-level timestamps and 'character' provides character-level timestamps per word.
diarize : typing.Optional[bool]
Whether to annotate which speaker is currently talking in the uploaded file.
additional_formats : typing.Optional[AdditionalFormats]
A list of additional formats to export the transcript to.
file_format : typing.Optional[SpeechToTextConvertRequestFileFormat]
The format of input audio. Options are 'pcm_s16le_16' or 'other'. For `pcm_s16le_16`, the input audio must be 16-bit PCM at a 16kHz sample rate, single channel (mono), and little-endian byte order. Latency will be lower than with passing an encoded waveform.
cloud_storage_url : typing.Optional[str]
The valid AWS S3, Cloudflare R2 or Google Cloud Storage URL of the file to transcribe. Exactly one of the file or cloud_storage_url parameters must be provided. The file must be a valid publicly accessible cloud storage URL. The file size must be less than 2GB. URL can be pre-signed.
webhook : typing.Optional[bool]
Whether to send the transcription result to configured speech-to-text webhooks. If set, the request will return early without the transcription, which will be delivered later via webhook. Webhooks can be created and assigned to a transcription task in the webhook settings page in the UI.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
SpeechToTextChunkResponseModel
Synchronous transcription result
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.speech_to_text.convert(
model_id="model_id",
)
| convert | python | elevenlabs/elevenlabs-python | src/elevenlabs/speech_to_text/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/speech_to_text/client.py | MIT |
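The `file` / `cloud_storage_url` constraint documented above — exactly one of the two must be provided — can be expressed as a small pre-flight check. The helper name `stt_source` is hypothetical and not part of the SDK:

```python
def stt_source(file=None, cloud_storage_url=None):
    """Per the convert docstring, exactly one of `file` or
    `cloud_storage_url` must be provided; return whichever was given."""
    if (file is None) == (cloud_storage_url is None):
        raise ValueError("Provide exactly one of `file` or `cloud_storage_url`.")
    return file if file is not None else cloud_storage_url
```

The equality test covers both failure modes at once: neither argument set, or both set.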
async def convert(
self,
*,
model_id: str,
enable_logging: typing.Optional[bool] = None,
file: typing.Optional[core.File] = OMIT,
language_code: typing.Optional[str] = OMIT,
tag_audio_events: typing.Optional[bool] = OMIT,
num_speakers: typing.Optional[int] = OMIT,
timestamps_granularity: typing.Optional[SpeechToTextConvertRequestTimestampsGranularity] = OMIT,
diarize: typing.Optional[bool] = OMIT,
additional_formats: typing.Optional[AdditionalFormats] = OMIT,
file_format: typing.Optional[SpeechToTextConvertRequestFileFormat] = OMIT,
cloud_storage_url: typing.Optional[str] = OMIT,
webhook: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> SpeechToTextChunkResponseModel:
"""
Transcribe an audio or video file. If webhook is set to true, the request will be processed asynchronously and results sent to configured webhooks.
Parameters
----------
model_id : str
The ID of the model to use for transcription, currently only 'scribe_v1' and 'scribe_v1_experimental' are available.
enable_logging : typing.Optional[bool]
When enable_logging is set to false, zero retention mode will be used for the request. This means history features are unavailable for this request, including request stitching. Zero retention mode may only be used by enterprise customers.
file : typing.Optional[core.File]
See core.File for more documentation
language_code : typing.Optional[str]
An ISO-639-1 or ISO-639-3 language_code corresponding to the language of the audio file. Can sometimes improve transcription performance if known beforehand. Defaults to null; in this case the language is predicted automatically.
tag_audio_events : typing.Optional[bool]
Whether to tag audio events like (laughter), (footsteps), etc. in the transcription.
num_speakers : typing.Optional[int]
The maximum number of speakers talking in the uploaded file. Can help with predicting who speaks when. The maximum number of speakers that can be predicted is 32. Defaults to null; in this case the number of speakers is set to the maximum value the model supports.
timestamps_granularity : typing.Optional[SpeechToTextConvertRequestTimestampsGranularity]
The granularity of the timestamps in the transcription. 'word' provides word-level timestamps and 'character' provides character-level timestamps per word.
diarize : typing.Optional[bool]
Whether to annotate which speaker is currently talking in the uploaded file.
additional_formats : typing.Optional[AdditionalFormats]
A list of additional formats to export the transcript to.
file_format : typing.Optional[SpeechToTextConvertRequestFileFormat]
The format of input audio. Options are 'pcm_s16le_16' or 'other'. For `pcm_s16le_16`, the input audio must be 16-bit PCM at a 16kHz sample rate, single channel (mono), and little-endian byte order. Latency will be lower than with passing an encoded waveform.
cloud_storage_url : typing.Optional[str]
The valid AWS S3, Cloudflare R2 or Google Cloud Storage URL of the file to transcribe. Exactly one of the file or cloud_storage_url parameters must be provided. The file must be a valid publicly accessible cloud storage URL. The file size must be less than 2GB. URL can be pre-signed.
webhook : typing.Optional[bool]
Whether to send the transcription result to configured speech-to-text webhooks. If set the request will return early without the transcription, which will be delivered later via webhook. Webhooks can be created and assigned to a transcription task in webhook settings page in the UI.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
SpeechToTextChunkResponseModel
Synchronous transcription result
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.speech_to_text.convert(
model_id="model_id",
)
asyncio.run(main())
"""
_response = await self._raw_client.convert(
model_id=model_id,
enable_logging=enable_logging,
file=file,
language_code=language_code,
tag_audio_events=tag_audio_events,
num_speakers=num_speakers,
timestamps_granularity=timestamps_granularity,
diarize=diarize,
additional_formats=additional_formats,
file_format=file_format,
cloud_storage_url=cloud_storage_url,
webhook=webhook,
request_options=request_options,
)
return _response.data |
Transcribe an audio or video file. If webhook is set to true, the request will be processed asynchronously and results sent to configured webhooks.
Parameters
----------
model_id : str
The ID of the model to use for transcription, currently only 'scribe_v1' and 'scribe_v1_experimental' are available.
enable_logging : typing.Optional[bool]
When enable_logging is set to false, zero retention mode will be used for the request. This means history features are unavailable for this request, including request stitching. Zero retention mode may only be used by enterprise customers.
file : typing.Optional[core.File]
See core.File for more documentation
language_code : typing.Optional[str]
An ISO-639-1 or ISO-639-3 language_code corresponding to the language of the audio file. Can sometimes improve transcription performance if known beforehand. Defaults to null; in this case the language is predicted automatically.
tag_audio_events : typing.Optional[bool]
Whether to tag audio events like (laughter), (footsteps), etc. in the transcription.
num_speakers : typing.Optional[int]
The maximum number of speakers talking in the uploaded file. Can help with predicting who speaks when. The maximum number of speakers that can be predicted is 32. Defaults to null; in this case the number of speakers is set to the maximum value the model supports.
timestamps_granularity : typing.Optional[SpeechToTextConvertRequestTimestampsGranularity]
The granularity of the timestamps in the transcription. 'word' provides word-level timestamps and 'character' provides character-level timestamps per word.
diarize : typing.Optional[bool]
Whether to annotate which speaker is currently talking in the uploaded file.
additional_formats : typing.Optional[AdditionalFormats]
A list of additional formats to export the transcript to.
file_format : typing.Optional[SpeechToTextConvertRequestFileFormat]
The format of input audio. Options are 'pcm_s16le_16' or 'other'. For `pcm_s16le_16`, the input audio must be 16-bit PCM at a 16kHz sample rate, single channel (mono), and little-endian byte order. Latency will be lower than with passing an encoded waveform.
cloud_storage_url : typing.Optional[str]
The valid AWS S3, Cloudflare R2 or Google Cloud Storage URL of the file to transcribe. Exactly one of the file or cloud_storage_url parameters must be provided. The URL must be publicly accessible and the file size must be less than 2GB. The URL can be pre-signed.
webhook : typing.Optional[bool]
Whether to send the transcription result to configured speech-to-text webhooks. If set, the request will return early without the transcription, which will be delivered later via webhook. Webhooks can be created and assigned to a transcription task on the webhook settings page in the UI.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
SpeechToTextChunkResponseModel
Synchronous transcription result
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.speech_to_text.convert(
model_id="model_id",
)
asyncio.run(main())
| convert | python | elevenlabs/elevenlabs-python | src/elevenlabs/speech_to_text/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/speech_to_text/client.py | MIT |
def convert(
self,
*,
model_id: str,
enable_logging: typing.Optional[bool] = None,
file: typing.Optional[core.File] = OMIT,
language_code: typing.Optional[str] = OMIT,
tag_audio_events: typing.Optional[bool] = OMIT,
num_speakers: typing.Optional[int] = OMIT,
timestamps_granularity: typing.Optional[SpeechToTextConvertRequestTimestampsGranularity] = OMIT,
diarize: typing.Optional[bool] = OMIT,
additional_formats: typing.Optional[AdditionalFormats] = OMIT,
file_format: typing.Optional[SpeechToTextConvertRequestFileFormat] = OMIT,
cloud_storage_url: typing.Optional[str] = OMIT,
webhook: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[SpeechToTextChunkResponseModel]:
"""
Transcribe an audio or video file. If webhook is set to true, the request will be processed asynchronously and results sent to configured webhooks.
Parameters
----------
model_id : str
The ID of the model to use for transcription, currently only 'scribe_v1' and 'scribe_v1_experimental' are available.
enable_logging : typing.Optional[bool]
When enable_logging is set to false, zero retention mode will be used for the request. This means history features are unavailable for this request, including request stitching. Zero retention mode may only be used by enterprise customers.
file : typing.Optional[core.File]
See core.File for more documentation
language_code : typing.Optional[str]
An ISO-639-1 or ISO-639-3 language_code corresponding to the language of the audio file. Can sometimes improve transcription performance if known beforehand. Defaults to null; in this case the language is predicted automatically.
tag_audio_events : typing.Optional[bool]
Whether to tag audio events like (laughter), (footsteps), etc. in the transcription.
num_speakers : typing.Optional[int]
The maximum number of speakers talking in the uploaded file. Can help with predicting who speaks when. The maximum number of speakers that can be predicted is 32. Defaults to null; in this case the number of speakers is set to the maximum value the model supports.
timestamps_granularity : typing.Optional[SpeechToTextConvertRequestTimestampsGranularity]
The granularity of the timestamps in the transcription. 'word' provides word-level timestamps and 'character' provides character-level timestamps per word.
diarize : typing.Optional[bool]
Whether to annotate which speaker is currently talking in the uploaded file.
additional_formats : typing.Optional[AdditionalFormats]
A list of additional formats to export the transcript to.
file_format : typing.Optional[SpeechToTextConvertRequestFileFormat]
The format of input audio. Options are 'pcm_s16le_16' or 'other'. For `pcm_s16le_16`, the input audio must be 16-bit PCM at a 16kHz sample rate, single channel (mono), and little-endian byte order. Latency will be lower than with passing an encoded waveform.
cloud_storage_url : typing.Optional[str]
The valid AWS S3, Cloudflare R2 or Google Cloud Storage URL of the file to transcribe. Exactly one of the file or cloud_storage_url parameters must be provided. The URL must be publicly accessible and the file size must be less than 2GB. The URL can be pre-signed.
webhook : typing.Optional[bool]
Whether to send the transcription result to configured speech-to-text webhooks. If set, the request will return early without the transcription, which will be delivered later via webhook. Webhooks can be created and assigned to a transcription task on the webhook settings page in the UI.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[SpeechToTextChunkResponseModel]
Synchronous transcription result
"""
_response = self._client_wrapper.httpx_client.request(
"v1/speech-to-text",
base_url=self._client_wrapper.get_environment().base,
method="POST",
params={
"enable_logging": enable_logging,
},
data={
"model_id": model_id,
"language_code": language_code,
"tag_audio_events": tag_audio_events,
"num_speakers": num_speakers,
"timestamps_granularity": timestamps_granularity,
"diarize": diarize,
"additional_formats": additional_formats,
"file_format": file_format,
"cloud_storage_url": cloud_storage_url,
"webhook": webhook,
},
files={
**({"file": file} if file is not None else {}),
},
request_options=request_options,
omit=OMIT,
force_multipart=True,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
SpeechToTextChunkResponseModel,
construct_type(
type_=SpeechToTextChunkResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Transcribe an audio or video file. If webhook is set to true, the request will be processed asynchronously and results sent to configured webhooks.
Parameters
----------
model_id : str
The ID of the model to use for transcription, currently only 'scribe_v1' and 'scribe_v1_experimental' are available.
enable_logging : typing.Optional[bool]
When enable_logging is set to false, zero retention mode will be used for the request. This means history features are unavailable for this request, including request stitching. Zero retention mode may only be used by enterprise customers.
file : typing.Optional[core.File]
See core.File for more documentation
language_code : typing.Optional[str]
An ISO-639-1 or ISO-639-3 language_code corresponding to the language of the audio file. Can sometimes improve transcription performance if known beforehand. Defaults to null; in this case the language is predicted automatically.
tag_audio_events : typing.Optional[bool]
Whether to tag audio events like (laughter), (footsteps), etc. in the transcription.
num_speakers : typing.Optional[int]
The maximum number of speakers talking in the uploaded file. Can help with predicting who speaks when. The maximum number of speakers that can be predicted is 32. Defaults to null; in this case the number of speakers is set to the maximum value the model supports.
timestamps_granularity : typing.Optional[SpeechToTextConvertRequestTimestampsGranularity]
The granularity of the timestamps in the transcription. 'word' provides word-level timestamps and 'character' provides character-level timestamps per word.
diarize : typing.Optional[bool]
Whether to annotate which speaker is currently talking in the uploaded file.
additional_formats : typing.Optional[AdditionalFormats]
A list of additional formats to export the transcript to.
file_format : typing.Optional[SpeechToTextConvertRequestFileFormat]
The format of input audio. Options are 'pcm_s16le_16' or 'other'. For `pcm_s16le_16`, the input audio must be 16-bit PCM at a 16kHz sample rate, single channel (mono), and little-endian byte order. Latency will be lower than with passing an encoded waveform.
cloud_storage_url : typing.Optional[str]
The valid AWS S3, Cloudflare R2 or Google Cloud Storage URL of the file to transcribe. Exactly one of the file or cloud_storage_url parameters must be provided. The URL must be publicly accessible and the file size must be less than 2GB. The URL can be pre-signed.
webhook : typing.Optional[bool]
Whether to send the transcription result to configured speech-to-text webhooks. If set, the request will return early without the transcription, which will be delivered later via webhook. Webhooks can be created and assigned to a transcription task on the webhook settings page in the UI.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[SpeechToTextChunkResponseModel]
Synchronous transcription result
| convert | python | elevenlabs/elevenlabs-python | src/elevenlabs/speech_to_text/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/speech_to_text/raw_client.py | MIT |
async def convert(
self,
*,
model_id: str,
enable_logging: typing.Optional[bool] = None,
file: typing.Optional[core.File] = OMIT,
language_code: typing.Optional[str] = OMIT,
tag_audio_events: typing.Optional[bool] = OMIT,
num_speakers: typing.Optional[int] = OMIT,
timestamps_granularity: typing.Optional[SpeechToTextConvertRequestTimestampsGranularity] = OMIT,
diarize: typing.Optional[bool] = OMIT,
additional_formats: typing.Optional[AdditionalFormats] = OMIT,
file_format: typing.Optional[SpeechToTextConvertRequestFileFormat] = OMIT,
cloud_storage_url: typing.Optional[str] = OMIT,
webhook: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[SpeechToTextChunkResponseModel]:
"""
Transcribe an audio or video file. If webhook is set to true, the request will be processed asynchronously and results sent to configured webhooks.
Parameters
----------
model_id : str
The ID of the model to use for transcription, currently only 'scribe_v1' and 'scribe_v1_experimental' are available.
enable_logging : typing.Optional[bool]
When enable_logging is set to false, zero retention mode will be used for the request. This means history features are unavailable for this request, including request stitching. Zero retention mode may only be used by enterprise customers.
file : typing.Optional[core.File]
See core.File for more documentation
language_code : typing.Optional[str]
An ISO-639-1 or ISO-639-3 language_code corresponding to the language of the audio file. Can sometimes improve transcription performance if known beforehand. Defaults to null; in this case the language is predicted automatically.
tag_audio_events : typing.Optional[bool]
Whether to tag audio events like (laughter), (footsteps), etc. in the transcription.
num_speakers : typing.Optional[int]
The maximum number of speakers talking in the uploaded file. Can help with predicting who speaks when. The maximum number of speakers that can be predicted is 32. Defaults to null; in this case the number of speakers is set to the maximum value the model supports.
timestamps_granularity : typing.Optional[SpeechToTextConvertRequestTimestampsGranularity]
The granularity of the timestamps in the transcription. 'word' provides word-level timestamps and 'character' provides character-level timestamps per word.
diarize : typing.Optional[bool]
Whether to annotate which speaker is currently talking in the uploaded file.
additional_formats : typing.Optional[AdditionalFormats]
A list of additional formats to export the transcript to.
file_format : typing.Optional[SpeechToTextConvertRequestFileFormat]
The format of input audio. Options are 'pcm_s16le_16' or 'other'. For `pcm_s16le_16`, the input audio must be 16-bit PCM at a 16kHz sample rate, single channel (mono), and little-endian byte order. Latency will be lower than with passing an encoded waveform.
cloud_storage_url : typing.Optional[str]
The valid AWS S3, Cloudflare R2 or Google Cloud Storage URL of the file to transcribe. Exactly one of the file or cloud_storage_url parameters must be provided. The URL must be publicly accessible and the file size must be less than 2GB. The URL can be pre-signed.
webhook : typing.Optional[bool]
Whether to send the transcription result to configured speech-to-text webhooks. If set, the request will return early without the transcription, which will be delivered later via webhook. Webhooks can be created and assigned to a transcription task on the webhook settings page in the UI.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[SpeechToTextChunkResponseModel]
Synchronous transcription result
"""
_response = await self._client_wrapper.httpx_client.request(
"v1/speech-to-text",
base_url=self._client_wrapper.get_environment().base,
method="POST",
params={
"enable_logging": enable_logging,
},
data={
"model_id": model_id,
"language_code": language_code,
"tag_audio_events": tag_audio_events,
"num_speakers": num_speakers,
"timestamps_granularity": timestamps_granularity,
"diarize": diarize,
"additional_formats": additional_formats,
"file_format": file_format,
"cloud_storage_url": cloud_storage_url,
"webhook": webhook,
},
files={
**({"file": file} if file is not None else {}),
},
request_options=request_options,
omit=OMIT,
force_multipart=True,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
SpeechToTextChunkResponseModel,
construct_type(
type_=SpeechToTextChunkResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Transcribe an audio or video file. If webhook is set to true, the request will be processed asynchronously and results sent to configured webhooks.
Parameters
----------
model_id : str
The ID of the model to use for transcription, currently only 'scribe_v1' and 'scribe_v1_experimental' are available.
enable_logging : typing.Optional[bool]
When enable_logging is set to false, zero retention mode will be used for the request. This means history features are unavailable for this request, including request stitching. Zero retention mode may only be used by enterprise customers.
file : typing.Optional[core.File]
See core.File for more documentation
language_code : typing.Optional[str]
An ISO-639-1 or ISO-639-3 language_code corresponding to the language of the audio file. Can sometimes improve transcription performance if known beforehand. Defaults to null; in this case the language is predicted automatically.
tag_audio_events : typing.Optional[bool]
Whether to tag audio events like (laughter), (footsteps), etc. in the transcription.
num_speakers : typing.Optional[int]
The maximum number of speakers talking in the uploaded file. Can help with predicting who speaks when. The maximum number of speakers that can be predicted is 32. Defaults to null; in this case the number of speakers is set to the maximum value the model supports.
timestamps_granularity : typing.Optional[SpeechToTextConvertRequestTimestampsGranularity]
The granularity of the timestamps in the transcription. 'word' provides word-level timestamps and 'character' provides character-level timestamps per word.
diarize : typing.Optional[bool]
Whether to annotate which speaker is currently talking in the uploaded file.
additional_formats : typing.Optional[AdditionalFormats]
A list of additional formats to export the transcript to.
file_format : typing.Optional[SpeechToTextConvertRequestFileFormat]
The format of input audio. Options are 'pcm_s16le_16' or 'other'. For `pcm_s16le_16`, the input audio must be 16-bit PCM at a 16kHz sample rate, single channel (mono), and little-endian byte order. Latency will be lower than with passing an encoded waveform.
cloud_storage_url : typing.Optional[str]
The valid AWS S3, Cloudflare R2 or Google Cloud Storage URL of the file to transcribe. Exactly one of the file or cloud_storage_url parameters must be provided. The URL must be publicly accessible and the file size must be less than 2GB. The URL can be pre-signed.
webhook : typing.Optional[bool]
Whether to send the transcription result to configured speech-to-text webhooks. If set, the request will return early without the transcription, which will be delivered later via webhook. Webhooks can be created and assigned to a transcription task on the webhook settings page in the UI.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[SpeechToTextChunkResponseModel]
Synchronous transcription result
| convert | python | elevenlabs/elevenlabs-python | src/elevenlabs/speech_to_text/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/speech_to_text/raw_client.py | MIT |
def create_podcast(
self,
*,
model_id: str,
mode: BodyCreatePodcastV1StudioPodcastsPostMode,
source: BodyCreatePodcastV1StudioPodcastsPostSource,
quality_preset: typing.Optional[BodyCreatePodcastV1StudioPodcastsPostQualityPreset] = OMIT,
duration_scale: typing.Optional[BodyCreatePodcastV1StudioPodcastsPostDurationScale] = OMIT,
language: typing.Optional[str] = OMIT,
highlights: typing.Optional[typing.Sequence[str]] = OMIT,
callback_url: typing.Optional[str] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> PodcastProjectResponseModel:
"""
Create and auto-convert a podcast project. Currently, the LLM cost is covered by us but you will still be charged for the audio generation. In the future, you will be charged for both the LLM and audio generation costs.
Parameters
----------
model_id : str
The ID of the model to be used for this Studio project, you can query GET /v1/models to list all available models.
mode : BodyCreatePodcastV1StudioPodcastsPostMode
The type of podcast to generate. Can be 'conversation', an interaction between two voices, or 'bulletin', a monologue.
source : BodyCreatePodcastV1StudioPodcastsPostSource
The source content for the Podcast.
quality_preset : typing.Optional[BodyCreatePodcastV1StudioPodcastsPostQualityPreset]
Output quality of the generated audio. Must be one of:
standard - standard output format, 128kbps with 44.1kHz sample rate.
high - high quality output format, 192kbps with 44.1kHz sample rate and major improvements on our side. Using this setting increases the credit cost by 20%.
ultra - ultra quality output format, 192kbps with 44.1kHz sample rate and highest improvements on our side. Using this setting increases the credit cost by 50%.
ultra lossless - ultra quality output format, 705.6kbps with 44.1kHz sample rate and highest improvements on our side in a fully lossless format. Using this setting increases the credit cost by 100%.
duration_scale : typing.Optional[BodyCreatePodcastV1StudioPodcastsPostDurationScale]
Duration of the generated podcast. Must be one of:
short - produces podcasts shorter than 3 minutes.
default - produces podcasts roughly between 3-7 minutes.
long - produces podcasts longer than 7 minutes.
language : typing.Optional[str]
An optional language of the Studio project. Two-letter language code (ISO 639-1).
highlights : typing.Optional[typing.Sequence[str]]
A brief summary or highlights of the Studio project's content, providing key points or themes. This should be between 10 and 70 characters.
callback_url : typing.Optional[str]
A URL that will be called by our service when the Studio project is converted. The request will contain a JSON blob with the status of the conversion.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
PodcastProjectResponseModel
Successful Response
Examples
--------
from elevenlabs import (
ElevenLabs,
PodcastConversationModeData,
PodcastTextSource,
)
from elevenlabs.studio import (
BodyCreatePodcastV1StudioPodcastsPostMode_Conversation,
)
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.studio.create_podcast(
model_id="21m00Tcm4TlvDq8ikWAM",
mode=BodyCreatePodcastV1StudioPodcastsPostMode_Conversation(
conversation=PodcastConversationModeData(
host_voice_id="aw1NgEzBg83R7vgmiJt6",
guest_voice_id="aw1NgEzBg83R7vgmiJt7",
),
),
source=PodcastTextSource(
text="This is a test podcast.",
),
)
"""
_response = self._raw_client.create_podcast(
model_id=model_id,
mode=mode,
source=source,
quality_preset=quality_preset,
duration_scale=duration_scale,
language=language,
highlights=highlights,
callback_url=callback_url,
request_options=request_options,
)
return _response.data |
Create and auto-convert a podcast project. Currently, the LLM cost is covered by us but you will still be charged for the audio generation. In the future, you will be charged for both the LLM and audio generation costs.
Parameters
----------
model_id : str
The ID of the model to be used for this Studio project, you can query GET /v1/models to list all available models.
mode : BodyCreatePodcastV1StudioPodcastsPostMode
The type of podcast to generate. Can be 'conversation', an interaction between two voices, or 'bulletin', a monologue.
source : BodyCreatePodcastV1StudioPodcastsPostSource
The source content for the Podcast.
quality_preset : typing.Optional[BodyCreatePodcastV1StudioPodcastsPostQualityPreset]
Output quality of the generated audio. Must be one of:
standard - standard output format, 128kbps with 44.1kHz sample rate.
high - high quality output format, 192kbps with 44.1kHz sample rate and major improvements on our side. Using this setting increases the credit cost by 20%.
ultra - ultra quality output format, 192kbps with 44.1kHz sample rate and highest improvements on our side. Using this setting increases the credit cost by 50%.
ultra lossless - ultra quality output format, 705.6kbps with 44.1kHz sample rate and highest improvements on our side in a fully lossless format. Using this setting increases the credit cost by 100%.
duration_scale : typing.Optional[BodyCreatePodcastV1StudioPodcastsPostDurationScale]
Duration of the generated podcast. Must be one of:
short - produces podcasts shorter than 3 minutes.
default - produces podcasts roughly between 3-7 minutes.
long - produces podcasts longer than 7 minutes.
language : typing.Optional[str]
An optional language of the Studio project. Two-letter language code (ISO 639-1).
highlights : typing.Optional[typing.Sequence[str]]
A brief summary or highlights of the Studio project's content, providing key points or themes. This should be between 10 and 70 characters.
callback_url : typing.Optional[str]
A URL that will be called by our service when the Studio project is converted. The request will contain a JSON blob with the status of the conversion.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
PodcastProjectResponseModel
Successful Response
Examples
--------
from elevenlabs import (
ElevenLabs,
PodcastConversationModeData,
PodcastTextSource,
)
from elevenlabs.studio import (
BodyCreatePodcastV1StudioPodcastsPostMode_Conversation,
)
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.studio.create_podcast(
model_id="21m00Tcm4TlvDq8ikWAM",
mode=BodyCreatePodcastV1StudioPodcastsPostMode_Conversation(
conversation=PodcastConversationModeData(
host_voice_id="aw1NgEzBg83R7vgmiJt6",
guest_voice_id="aw1NgEzBg83R7vgmiJt7",
),
),
source=PodcastTextSource(
text="This is a test podcast.",
),
)
| create_podcast | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/client.py | MIT |
async def create_podcast(
self,
*,
model_id: str,
mode: BodyCreatePodcastV1StudioPodcastsPostMode,
source: BodyCreatePodcastV1StudioPodcastsPostSource,
quality_preset: typing.Optional[BodyCreatePodcastV1StudioPodcastsPostQualityPreset] = OMIT,
duration_scale: typing.Optional[BodyCreatePodcastV1StudioPodcastsPostDurationScale] = OMIT,
language: typing.Optional[str] = OMIT,
highlights: typing.Optional[typing.Sequence[str]] = OMIT,
callback_url: typing.Optional[str] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> PodcastProjectResponseModel:
"""
Create and auto-convert a podcast project. Currently, the LLM cost is covered by us but you will still be charged for the audio generation. In the future, you will be charged for both the LLM and audio generation costs.
Parameters
----------
model_id : str
The ID of the model to be used for this Studio project, you can query GET /v1/models to list all available models.
mode : BodyCreatePodcastV1StudioPodcastsPostMode
The type of podcast to generate. Can be 'conversation', an interaction between two voices, or 'bulletin', a monologue.
source : BodyCreatePodcastV1StudioPodcastsPostSource
The source content for the Podcast.
quality_preset : typing.Optional[BodyCreatePodcastV1StudioPodcastsPostQualityPreset]
Output quality of the generated audio. Must be one of:
standard - standard output format, 128kbps with 44.1kHz sample rate.
high - high quality output format, 192kbps with 44.1kHz sample rate and major improvements on our side. Using this setting increases the credit cost by 20%.
ultra - ultra quality output format, 192kbps with 44.1kHz sample rate and highest improvements on our side. Using this setting increases the credit cost by 50%.
ultra lossless - ultra quality output format, 705.6kbps with 44.1kHz sample rate and highest improvements on our side in a fully lossless format. Using this setting increases the credit cost by 100%.
duration_scale : typing.Optional[BodyCreatePodcastV1StudioPodcastsPostDurationScale]
Duration of the generated podcast. Must be one of:
short - produces podcasts shorter than 3 minutes.
default - produces podcasts roughly between 3-7 minutes.
long - produces podcasts longer than 7 minutes.
language : typing.Optional[str]
An optional language of the Studio project. Two-letter language code (ISO 639-1).
highlights : typing.Optional[typing.Sequence[str]]
A brief summary or highlights of the Studio project's content, providing key points or themes. This should be between 10 and 70 characters.
callback_url : typing.Optional[str]
A URL that will be called by our service when the Studio project is converted. The request will contain a JSON blob with the status of the conversion.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
PodcastProjectResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import (
AsyncElevenLabs,
PodcastConversationModeData,
PodcastTextSource,
)
from elevenlabs.studio import (
BodyCreatePodcastV1StudioPodcastsPostMode_Conversation,
)
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.create_podcast(
model_id="21m00Tcm4TlvDq8ikWAM",
mode=BodyCreatePodcastV1StudioPodcastsPostMode_Conversation(
conversation=PodcastConversationModeData(
host_voice_id="aw1NgEzBg83R7vgmiJt6",
guest_voice_id="aw1NgEzBg83R7vgmiJt7",
),
),
source=PodcastTextSource(
text="This is a test podcast.",
),
)
asyncio.run(main())
"""
_response = await self._raw_client.create_podcast(
model_id=model_id,
mode=mode,
source=source,
quality_preset=quality_preset,
duration_scale=duration_scale,
language=language,
highlights=highlights,
callback_url=callback_url,
request_options=request_options,
)
return _response.data |
Create and auto-convert a podcast project. Currently, the LLM cost is covered by us but you will still be charged for the audio generation. In the future, you will be charged for both the LLM and audio generation costs.
Parameters
----------
model_id : str
The ID of the model to be used for this Studio project, you can query GET /v1/models to list all available models.
mode : BodyCreatePodcastV1StudioPodcastsPostMode
The type of podcast to generate. Can be 'conversation', an interaction between two voices, or 'bulletin', a monologue.
source : BodyCreatePodcastV1StudioPodcastsPostSource
The source content for the Podcast.
quality_preset : typing.Optional[BodyCreatePodcastV1StudioPodcastsPostQualityPreset]
Output quality of the generated audio. Must be one of:
standard - standard output format, 128kbps with 44.1kHz sample rate.
high - high quality output format, 192kbps with 44.1kHz sample rate and major improvements on our side. Using this setting increases the credit cost by 20%.
ultra - ultra quality output format, 192kbps with 44.1kHz sample rate and highest improvements on our side. Using this setting increases the credit cost by 50%.
ultra lossless - ultra quality output format, 705.6kbps with 44.1kHz sample rate and highest improvements on our side in a fully lossless format. Using this setting increases the credit cost by 100%.
duration_scale : typing.Optional[BodyCreatePodcastV1StudioPodcastsPostDurationScale]
Duration of the generated podcast. Must be one of:
short - produces podcasts shorter than 3 minutes.
default - produces podcasts roughly between 3-7 minutes.
long - produces podcasts longer than 7 minutes.
language : typing.Optional[str]
An optional language of the Studio project. Two-letter language code (ISO 639-1).
highlights : typing.Optional[typing.Sequence[str]]
A brief summary or highlights of the Studio project's content, providing key points or themes. This should be between 10 and 70 characters.
callback_url : typing.Optional[str]
A URL that will be called by our service when the Studio project is converted. The request will contain a JSON blob with the status of the conversion.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
PodcastProjectResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import (
AsyncElevenLabs,
PodcastConversationModeData,
PodcastTextSource,
)
from elevenlabs.studio import (
BodyCreatePodcastV1StudioPodcastsPostMode_Conversation,
)
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.create_podcast(
model_id="21m00Tcm4TlvDq8ikWAM",
mode=BodyCreatePodcastV1StudioPodcastsPostMode_Conversation(
conversation=PodcastConversationModeData(
host_voice_id="aw1NgEzBg83R7vgmiJt6",
guest_voice_id="aw1NgEzBg83R7vgmiJt7",
),
),
source=PodcastTextSource(
text="This is a test podcast.",
),
)
asyncio.run(main())
| create_podcast | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/client.py | MIT |
def create_podcast(
self,
*,
model_id: str,
mode: BodyCreatePodcastV1StudioPodcastsPostMode,
source: BodyCreatePodcastV1StudioPodcastsPostSource,
quality_preset: typing.Optional[BodyCreatePodcastV1StudioPodcastsPostQualityPreset] = OMIT,
duration_scale: typing.Optional[BodyCreatePodcastV1StudioPodcastsPostDurationScale] = OMIT,
language: typing.Optional[str] = OMIT,
highlights: typing.Optional[typing.Sequence[str]] = OMIT,
callback_url: typing.Optional[str] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[PodcastProjectResponseModel]:
"""
Create and auto-convert a podcast project. Currently, the LLM cost is covered by us but you will still be charged for the audio generation. In the future, you will be charged for both the LLM and audio generation costs.
Parameters
----------
model_id : str
The ID of the model to be used for this Studio project, you can query GET /v1/models to list all available models.
mode : BodyCreatePodcastV1StudioPodcastsPostMode
The type of podcast to generate. Can be 'conversation', an interaction between two voices, or 'bulletin', a monologue.
source : BodyCreatePodcastV1StudioPodcastsPostSource
The source content for the Podcast.
quality_preset : typing.Optional[BodyCreatePodcastV1StudioPodcastsPostQualityPreset]
Output quality of the generated audio. Must be one of:
standard - standard output format, 128kbps with 44.1kHz sample rate.
high - high quality output format, 192kbps with 44.1kHz sample rate and major improvements on our side. Using this setting increases the credit cost by 20%.
ultra - ultra quality output format, 192kbps with 44.1kHz sample rate and highest improvements on our side. Using this setting increases the credit cost by 50%.
ultra lossless - ultra quality output format, 705.6kbps with 44.1kHz sample rate and highest improvements on our side in a fully lossless format. Using this setting increases the credit cost by 100%.
duration_scale : typing.Optional[BodyCreatePodcastV1StudioPodcastsPostDurationScale]
Duration of the generated podcast. Must be one of:
short - produces podcasts shorter than 3 minutes.
default - produces podcasts roughly between 3-7 minutes.
long - produces podcasts longer than 7 minutes.
language : typing.Optional[str]
An optional language of the Studio project. Two-letter language code (ISO 639-1).
highlights : typing.Optional[typing.Sequence[str]]
A brief summary or highlights of the Studio project's content, providing key points or themes. This should be between 10 and 70 characters.
callback_url : typing.Optional[str]
A URL that will be called by our service when the Studio project is converted. The request will contain a JSON blob with the status of the conversion.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[PodcastProjectResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
"v1/studio/podcasts",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"model_id": model_id,
"mode": convert_and_respect_annotation_metadata(
object_=mode, annotation=BodyCreatePodcastV1StudioPodcastsPostMode, direction="write"
),
"source": convert_and_respect_annotation_metadata(
object_=source, annotation=BodyCreatePodcastV1StudioPodcastsPostSource, direction="write"
),
"quality_preset": quality_preset,
"duration_scale": duration_scale,
"language": language,
"highlights": highlights,
"callback_url": callback_url,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
PodcastProjectResponseModel,
construct_type(
type_=PodcastProjectResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Create and auto-convert a podcast project. Currently, the LLM cost is covered by us but you will still be charged for the audio generation. In the future, you will be charged for both the LLM and audio generation costs.
Parameters
----------
model_id : str
The ID of the model to be used for this Studio project, you can query GET /v1/models to list all available models.
mode : BodyCreatePodcastV1StudioPodcastsPostMode
The type of podcast to generate. Can be 'conversation', an interaction between two voices, or 'bulletin', a monologue.
source : BodyCreatePodcastV1StudioPodcastsPostSource
The source content for the Podcast.
quality_preset : typing.Optional[BodyCreatePodcastV1StudioPodcastsPostQualityPreset]
Output quality of the generated audio. Must be one of:
standard - standard output format, 128kbps with 44.1kHz sample rate.
high - high quality output format, 192kbps with 44.1kHz sample rate and major improvements on our side. Using this setting increases the credit cost by 20%.
ultra - ultra quality output format, 192kbps with 44.1kHz sample rate and highest improvements on our side. Using this setting increases the credit cost by 50%.
ultra lossless - ultra quality output format, 705.6kbps with 44.1kHz sample rate and highest improvements on our side in a fully lossless format. Using this setting increases the credit cost by 100%.
duration_scale : typing.Optional[BodyCreatePodcastV1StudioPodcastsPostDurationScale]
Duration of the generated podcast. Must be one of:
short - produces podcasts shorter than 3 minutes.
default - produces podcasts roughly between 3-7 minutes.
long - produces podcasts longer than 7 minutes.
language : typing.Optional[str]
An optional language of the Studio project. Two-letter language code (ISO 639-1).
highlights : typing.Optional[typing.Sequence[str]]
A brief summary or highlights of the Studio project's content, providing key points or themes. This should be between 10 and 70 characters.
callback_url : typing.Optional[str]
A URL that will be called by our service when the Studio project is converted. The request will contain a JSON blob with the status of the conversion.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[PodcastProjectResponseModel]
Successful Response
| create_podcast | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/raw_client.py | MIT |
async def create_podcast(
self,
*,
model_id: str,
mode: BodyCreatePodcastV1StudioPodcastsPostMode,
source: BodyCreatePodcastV1StudioPodcastsPostSource,
quality_preset: typing.Optional[BodyCreatePodcastV1StudioPodcastsPostQualityPreset] = OMIT,
duration_scale: typing.Optional[BodyCreatePodcastV1StudioPodcastsPostDurationScale] = OMIT,
language: typing.Optional[str] = OMIT,
highlights: typing.Optional[typing.Sequence[str]] = OMIT,
callback_url: typing.Optional[str] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[PodcastProjectResponseModel]:
"""
Create and auto-convert a podcast project. Currently, the LLM cost is covered by us but you will still be charged for the audio generation. In the future, you will be charged for both the LLM and audio generation costs.
Parameters
----------
model_id : str
The ID of the model to be used for this Studio project, you can query GET /v1/models to list all available models.
mode : BodyCreatePodcastV1StudioPodcastsPostMode
The type of podcast to generate. Can be 'conversation', an interaction between two voices, or 'bulletin', a monologue.
source : BodyCreatePodcastV1StudioPodcastsPostSource
The source content for the Podcast.
quality_preset : typing.Optional[BodyCreatePodcastV1StudioPodcastsPostQualityPreset]
Output quality of the generated audio. Must be one of:
standard - standard output format, 128kbps with 44.1kHz sample rate.
high - high quality output format, 192kbps with 44.1kHz sample rate and major improvements on our side. Using this setting increases the credit cost by 20%.
ultra - ultra quality output format, 192kbps with 44.1kHz sample rate and highest improvements on our side. Using this setting increases the credit cost by 50%.
ultra lossless - ultra quality output format, 705.6kbps with 44.1kHz sample rate and highest improvements on our side in a fully lossless format. Using this setting increases the credit cost by 100%.
duration_scale : typing.Optional[BodyCreatePodcastV1StudioPodcastsPostDurationScale]
Duration of the generated podcast. Must be one of:
short - produces podcasts shorter than 3 minutes.
default - produces podcasts roughly between 3-7 minutes.
long - produces podcasts longer than 7 minutes.
language : typing.Optional[str]
An optional language of the Studio project. Two-letter language code (ISO 639-1).
highlights : typing.Optional[typing.Sequence[str]]
A brief summary or highlights of the Studio project's content, providing key points or themes. This should be between 10 and 70 characters.
callback_url : typing.Optional[str]
A URL that will be called by our service when the Studio project is converted. The request will contain a JSON blob with the status of the conversion.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[PodcastProjectResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
"v1/studio/podcasts",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"model_id": model_id,
"mode": convert_and_respect_annotation_metadata(
object_=mode, annotation=BodyCreatePodcastV1StudioPodcastsPostMode, direction="write"
),
"source": convert_and_respect_annotation_metadata(
object_=source, annotation=BodyCreatePodcastV1StudioPodcastsPostSource, direction="write"
),
"quality_preset": quality_preset,
"duration_scale": duration_scale,
"language": language,
"highlights": highlights,
"callback_url": callback_url,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
PodcastProjectResponseModel,
construct_type(
type_=PodcastProjectResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Create and auto-convert a podcast project. Currently, the LLM cost is covered by us but you will still be charged for the audio generation. In the future, you will be charged for both the LLM and audio generation costs.
Parameters
----------
model_id : str
The ID of the model to be used for this Studio project, you can query GET /v1/models to list all available models.
mode : BodyCreatePodcastV1StudioPodcastsPostMode
The type of podcast to generate. Can be 'conversation', an interaction between two voices, or 'bulletin', a monologue.
source : BodyCreatePodcastV1StudioPodcastsPostSource
The source content for the Podcast.
quality_preset : typing.Optional[BodyCreatePodcastV1StudioPodcastsPostQualityPreset]
Output quality of the generated audio. Must be one of:
standard - standard output format, 128kbps with 44.1kHz sample rate.
high - high quality output format, 192kbps with 44.1kHz sample rate and major improvements on our side. Using this setting increases the credit cost by 20%.
ultra - ultra quality output format, 192kbps with 44.1kHz sample rate and highest improvements on our side. Using this setting increases the credit cost by 50%.
ultra lossless - ultra quality output format, 705.6kbps with 44.1kHz sample rate and highest improvements on our side in a fully lossless format. Using this setting increases the credit cost by 100%.
duration_scale : typing.Optional[BodyCreatePodcastV1StudioPodcastsPostDurationScale]
Duration of the generated podcast. Must be one of:
short - produces podcasts shorter than 3 minutes.
default - produces podcasts roughly between 3-7 minutes.
long - produces podcasts longer than 7 minutes.
language : typing.Optional[str]
An optional language of the Studio project. Two-letter language code (ISO 639-1).
highlights : typing.Optional[typing.Sequence[str]]
A brief summary or highlights of the Studio project's content, providing key points or themes. This should be between 10 and 70 characters.
callback_url : typing.Optional[str]
A URL that will be called by our service when the Studio project is converted. The request will contain a JSON blob with the status of the conversion.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[PodcastProjectResponseModel]
Successful Response
| create_podcast | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/raw_client.py | MIT |
def create(
self,
*,
name: str,
default_title_voice_id: str,
default_paragraph_voice_id: str,
default_model_id: str,
from_url: typing.Optional[str] = OMIT,
from_document: typing.Optional[core.File] = OMIT,
quality_preset: typing.Optional[str] = OMIT,
title: typing.Optional[str] = OMIT,
author: typing.Optional[str] = OMIT,
description: typing.Optional[str] = OMIT,
genres: typing.Optional[typing.List[str]] = OMIT,
target_audience: typing.Optional[ProjectsCreateRequestTargetAudience] = OMIT,
language: typing.Optional[str] = OMIT,
content_type: typing.Optional[str] = OMIT,
original_publication_date: typing.Optional[str] = OMIT,
mature_content: typing.Optional[bool] = OMIT,
isbn_number: typing.Optional[str] = OMIT,
acx_volume_normalization: typing.Optional[bool] = OMIT,
volume_normalization: typing.Optional[bool] = OMIT,
pronunciation_dictionary_locators: typing.Optional[typing.List[str]] = OMIT,
callback_url: typing.Optional[str] = OMIT,
fiction: typing.Optional[ProjectsCreateRequestFiction] = OMIT,
apply_text_normalization: typing.Optional[ProjectsCreateRequestApplyTextNormalization] = OMIT,
auto_convert: typing.Optional[bool] = OMIT,
auto_assign_voices: typing.Optional[bool] = OMIT,
source_type: typing.Optional[ProjectsCreateRequestSourceType] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AddProjectResponseModel:
"""
Creates a new Studio project, it can be either initialized as blank, from a document or from a URL.
Parameters
----------
name : str
The name of the Studio project, used for identification only.
default_title_voice_id : str
The voice_id that corresponds to the default voice used for new titles.
default_paragraph_voice_id : str
The voice_id that corresponds to the default voice used for new paragraphs.
default_model_id : str
The ID of the model to be used for this Studio project, you can query GET /v1/models to list all available models.
from_url : typing.Optional[str]
An optional URL from which we will extract content to initialize the Studio project. If this is set, 'from_document' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the Studio project as blank.
from_document : typing.Optional[core.File]
See core.File for more documentation
quality_preset : typing.Optional[str]
Output quality of the generated audio. Must be one of:
standard - standard output format, 128kbps with 44.1kHz sample rate.
high - high quality output format, 192kbps with 44.1kHz sample rate and major improvements on our side. Using this setting increases the credit cost by 20%.
ultra - ultra quality output format, 192kbps with 44.1kHz sample rate and highest improvements on our side. Using this setting increases the credit cost by 50%.
ultra lossless - ultra quality output format, 705.6kbps with 44.1kHz sample rate and highest improvements on our side in a fully lossless format. Using this setting increases the credit cost by 100%.
title : typing.Optional[str]
An optional title of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
author : typing.Optional[str]
An optional name of the author of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
description : typing.Optional[str]
An optional description of the Studio project.
genres : typing.Optional[typing.List[str]]
An optional list of genres associated with the Studio project.
target_audience : typing.Optional[ProjectsCreateRequestTargetAudience]
An optional target audience of the Studio project.
language : typing.Optional[str]
An optional language of the Studio project. Two-letter language code (ISO 639-1).
content_type : typing.Optional[str]
An optional content type of the Studio project.
original_publication_date : typing.Optional[str]
An optional original publication date of the Studio project, in the format YYYY-MM-DD or YYYY.
mature_content : typing.Optional[bool]
An optional specification of whether this Studio project contains mature content.
isbn_number : typing.Optional[str]
An optional ISBN number of the Studio project you want to create, this will be added as metadata to the mp3 file on Studio project or chapter download.
acx_volume_normalization : typing.Optional[bool]
[Deprecated] When the Studio project is downloaded, should the returned audio have postprocessing in order to make it compliant with audiobook normalized volume requirements
volume_normalization : typing.Optional[bool]
When the Studio project is downloaded, should the returned audio have postprocessing in order to make it compliant with audiobook normalized volume requirements
pronunciation_dictionary_locators : typing.Optional[typing.List[str]]
A list of pronunciation dictionary locators (pronunciation_dictionary_id, version_id) encoded as a list of JSON strings for pronunciation dictionaries to be applied to the text. A list of JSON-encoded strings is required because projects may be added through form data rather than a JSON body. To specify multiple dictionaries use multiple --form lines in your curl, such as --form 'pronunciation_dictionary_locators="{\"pronunciation_dictionary_id\":\"Vmd4Zor6fplcA7WrINey\",\"version_id\":\"hRPaxjlTdR7wFMhV4w0b\"}"' --form 'pronunciation_dictionary_locators="{\"pronunciation_dictionary_id\":\"JzWtcGQMJ6bnlWwyMo7e\",\"version_id\":\"lbmwxiLu4q6txYxgdZqn\"}"'. Note that multiple dictionaries are not currently supported by our UI, which will only show the first.
callback_url : typing.Optional[str]
A URL that will be called by our service when the Studio project is converted. The request will contain a JSON blob with the status of the conversion.
fiction : typing.Optional[ProjectsCreateRequestFiction]
An optional specification of whether the content of this Studio project is fiction.
apply_text_normalization : typing.Optional[ProjectsCreateRequestApplyTextNormalization]
This parameter controls text normalization with four modes: 'auto', 'on', 'apply_english' and 'off'.
When set to 'auto', the system will automatically decide whether to apply text normalization
(e.g., spelling out numbers). With 'on', text normalization will always be applied, while
with 'off', it will be skipped. 'apply_english' is the same as 'on' but will assume that text is in English.
auto_convert : typing.Optional[bool]
Whether to auto convert the Studio project to audio or not.
auto_assign_voices : typing.Optional[bool]
[Alpha Feature] Whether to automatically assign voices to phrases in the created Project.
source_type : typing.Optional[ProjectsCreateRequestSourceType]
The type of Studio project to create.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddProjectResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.studio.projects.create(
name="name",
default_title_voice_id="default_title_voice_id",
default_paragraph_voice_id="default_paragraph_voice_id",
default_model_id="default_model_id",
)
"""
_response = self._raw_client.create(
name=name,
default_title_voice_id=default_title_voice_id,
default_paragraph_voice_id=default_paragraph_voice_id,
default_model_id=default_model_id,
from_url=from_url,
from_document=from_document,
quality_preset=quality_preset,
title=title,
author=author,
description=description,
genres=genres,
target_audience=target_audience,
language=language,
content_type=content_type,
original_publication_date=original_publication_date,
mature_content=mature_content,
isbn_number=isbn_number,
acx_volume_normalization=acx_volume_normalization,
volume_normalization=volume_normalization,
pronunciation_dictionary_locators=pronunciation_dictionary_locators,
callback_url=callback_url,
fiction=fiction,
apply_text_normalization=apply_text_normalization,
auto_convert=auto_convert,
auto_assign_voices=auto_assign_voices,
source_type=source_type,
request_options=request_options,
)
return _response.data |
Creates a new Studio project, it can be either initialized as blank, from a document or from a URL.
Parameters
----------
name : str
The name of the Studio project, used for identification only.
default_title_voice_id : str
The voice_id that corresponds to the default voice used for new titles.
default_paragraph_voice_id : str
The voice_id that corresponds to the default voice used for new paragraphs.
default_model_id : str
The ID of the model to be used for this Studio project, you can query GET /v1/models to list all available models.
from_url : typing.Optional[str]
An optional URL from which we will extract content to initialize the Studio project. If this is set, 'from_document' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the Studio project as blank.
from_document : typing.Optional[core.File]
See core.File for more documentation
quality_preset : typing.Optional[str]
Output quality of the generated audio. Must be one of:
standard - standard output format, 128kbps with 44.1kHz sample rate.
high - high quality output format, 192kbps with 44.1kHz sample rate and major improvements on our side. Using this setting increases the credit cost by 20%.
ultra - ultra quality output format, 192kbps with 44.1kHz sample rate and highest improvements on our side. Using this setting increases the credit cost by 50%.
ultra lossless - ultra quality output format, 705.6kbps with 44.1kHz sample rate and highest improvements on our side in a fully lossless format. Using this setting increases the credit cost by 100%.
title : typing.Optional[str]
An optional title of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
author : typing.Optional[str]
An optional name of the author of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
description : typing.Optional[str]
An optional description of the Studio project.
genres : typing.Optional[typing.List[str]]
An optional list of genres associated with the Studio project.
target_audience : typing.Optional[ProjectsCreateRequestTargetAudience]
An optional target audience of the Studio project.
language : typing.Optional[str]
An optional language of the Studio project. Two-letter language code (ISO 639-1).
content_type : typing.Optional[str]
An optional content type of the Studio project.
original_publication_date : typing.Optional[str]
An optional original publication date of the Studio project, in the format YYYY-MM-DD or YYYY.
mature_content : typing.Optional[bool]
An optional specification of whether this Studio project contains mature content.
isbn_number : typing.Optional[str]
An optional ISBN number of the Studio project you want to create, this will be added as metadata to the mp3 file on Studio project or chapter download.
acx_volume_normalization : typing.Optional[bool]
[Deprecated] When the Studio project is downloaded, should the returned audio have postprocessing in order to make it compliant with audiobook normalized volume requirements
volume_normalization : typing.Optional[bool]
When the Studio project is downloaded, should the returned audio have postprocessing in order to make it compliant with audiobook normalized volume requirements
pronunciation_dictionary_locators : typing.Optional[typing.List[str]]
A list of pronunciation dictionary locators (pronunciation_dictionary_id, version_id) encoded as a list of JSON strings for pronunciation dictionaries to be applied to the text. A list of JSON-encoded strings is required because projects may be added through form data rather than a JSON body. To specify multiple dictionaries use multiple --form lines in your curl, such as --form 'pronunciation_dictionary_locators="{"pronunciation_dictionary_id":"Vmd4Zor6fplcA7WrINey","version_id":"hRPaxjlTdR7wFMhV4w0b"}"' --form 'pronunciation_dictionary_locators="{"pronunciation_dictionary_id":"JzWtcGQMJ6bnlWwyMo7e","version_id":"lbmwxiLu4q6txYxgdZqn"}"'. Note that multiple dictionaries are not currently supported by our UI, which will only show the first.
callback_url : typing.Optional[str]
A URL that will be called by our service when the Studio project is converted. The request will contain a JSON blob with the status of the conversion.
fiction : typing.Optional[ProjectsCreateRequestFiction]
An optional specification of whether the content of this Studio project is fiction.
apply_text_normalization : typing.Optional[ProjectsCreateRequestApplyTextNormalization]
This parameter controls text normalization with four modes: 'auto', 'on', 'apply_english' and 'off'.
When set to 'auto', the system will automatically decide whether to apply text normalization
(e.g., spelling out numbers). With 'on', text normalization will always be applied, while
with 'off', it will be skipped. 'apply_english' is the same as 'on' but will assume that text is in English.
auto_convert : typing.Optional[bool]
Whether to auto convert the Studio project to audio or not.
auto_assign_voices : typing.Optional[bool]
[Alpha Feature] Whether to automatically assign voices to phrases in the created Project.
source_type : typing.Optional[ProjectsCreateRequestSourceType]
The type of Studio project to create.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddProjectResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.studio.projects.create(
name="name",
default_title_voice_id="default_title_voice_id",
default_paragraph_voice_id="default_paragraph_voice_id",
default_model_id="default_model_id",
)
| create | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/client.py | MIT |
def get(
self, project_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> ProjectExtendedResponse:
"""
Returns information about a specific Studio project. This endpoint returns more detailed information about a project than `GET /v1/studio`.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
ProjectExtendedResponse
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.studio.projects.get(
project_id="21m00Tcm4TlvDq8ikWAM",
)
"""
_response = self._raw_client.get(project_id, request_options=request_options)
return _response.data |
Returns information about a specific Studio project. This endpoint returns more detailed information about a project than `GET /v1/studio`.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
ProjectExtendedResponse
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.studio.projects.get(
project_id="21m00Tcm4TlvDq8ikWAM",
)
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/client.py | MIT |
def update(
self,
project_id: str,
*,
name: str,
default_title_voice_id: str,
default_paragraph_voice_id: str,
title: typing.Optional[str] = OMIT,
author: typing.Optional[str] = OMIT,
isbn_number: typing.Optional[str] = OMIT,
volume_normalization: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> EditProjectResponseModel:
"""
Updates the specified Studio project by setting the values of the parameters passed.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
name : str
The name of the Studio project, used for identification only.
default_title_voice_id : str
The voice_id that corresponds to the default voice used for new titles.
default_paragraph_voice_id : str
The voice_id that corresponds to the default voice used for new paragraphs.
title : typing.Optional[str]
An optional title of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
author : typing.Optional[str]
An optional name of the author of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
isbn_number : typing.Optional[str]
An optional ISBN number of the Studio project you want to create, this will be added as metadata to the mp3 file on Studio project or chapter download.
volume_normalization : typing.Optional[bool]
When the Studio project is downloaded, whether the returned audio should be post-processed to comply with audiobook volume normalization requirements.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
EditProjectResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.studio.projects.update(
project_id="21m00Tcm4TlvDq8ikWAM",
name="Project 1",
default_title_voice_id="21m00Tcm4TlvDq8ikWAM",
default_paragraph_voice_id="21m00Tcm4TlvDq8ikWAM",
)
"""
_response = self._raw_client.update(
project_id,
name=name,
default_title_voice_id=default_title_voice_id,
default_paragraph_voice_id=default_paragraph_voice_id,
title=title,
author=author,
isbn_number=isbn_number,
volume_normalization=volume_normalization,
request_options=request_options,
)
return _response.data |
Updates the specified Studio project by setting the values of the parameters passed.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
name : str
The name of the Studio project, used for identification only.
default_title_voice_id : str
The voice_id that corresponds to the default voice used for new titles.
default_paragraph_voice_id : str
The voice_id that corresponds to the default voice used for new paragraphs.
title : typing.Optional[str]
An optional title of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
author : typing.Optional[str]
An optional name of the author of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
isbn_number : typing.Optional[str]
An optional ISBN number of the Studio project you want to create, this will be added as metadata to the mp3 file on Studio project or chapter download.
volume_normalization : typing.Optional[bool]
When the Studio project is downloaded, whether the returned audio should be post-processed to comply with audiobook volume normalization requirements.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
EditProjectResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.studio.projects.update(
project_id="21m00Tcm4TlvDq8ikWAM",
name="Project 1",
default_title_voice_id="21m00Tcm4TlvDq8ikWAM",
default_paragraph_voice_id="21m00Tcm4TlvDq8ikWAM",
)
| update | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/client.py | MIT |
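The optional parameters above default to `OMIT` rather than `None`, so "not passed" can be distinguished from "explicitly null" when the request body is built. A hedged sketch of that sentinel pattern, not the SDK's actual implementation:

```python
class _Omit:
    """Sentinel distinct from None; repr aids debugging."""

    def __repr__(self) -> str:
        return "OMIT"


OMIT = _Omit()


def build_payload(**kwargs):
    """Drop OMIT-valued keys; keep explicit None and everything else."""
    return {k: v for k, v in kwargs.items() if v is not OMIT}


payload = build_payload(
    name="Project 1",
    title=OMIT,        # never serialized into the request
    isbn_number=None,  # serialized as an explicit null
)
```

With this scheme an `update` call only sends the fields the caller actually set.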
def delete(
self, project_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> DeleteProjectResponseModel:
"""
Deletes a Studio project.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
DeleteProjectResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.studio.projects.delete(
project_id="21m00Tcm4TlvDq8ikWAM",
)
"""
_response = self._raw_client.delete(project_id, request_options=request_options)
return _response.data |
Deletes a Studio project.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
DeleteProjectResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.studio.projects.delete(
project_id="21m00Tcm4TlvDq8ikWAM",
)
| delete | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/client.py | MIT |
def convert(
self, project_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> ConvertProjectResponseModel:
"""
Starts conversion of a Studio project and all of its chapters.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
ConvertProjectResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.studio.projects.convert(
project_id="21m00Tcm4TlvDq8ikWAM",
)
"""
_response = self._raw_client.convert(project_id, request_options=request_options)
return _response.data |
Starts conversion of a Studio project and all of its chapters.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
ConvertProjectResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.studio.projects.convert(
project_id="21m00Tcm4TlvDq8ikWAM",
)
| convert | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/client.py | MIT |
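`convert` only starts the job; callers typically poll `get` until the project reports that conversion has finished. A hedged sketch of a capped exponential backoff schedule for such polling (the interval values are arbitrary choices, not SDK defaults):

```python
def backoff_schedule(initial: float = 1.0, factor: float = 2.0,
                     cap: float = 30.0, attempts: int = 6):
    """Yield capped exponential delays, in seconds, for polling project status."""
    delay = initial
    for _ in range(attempts):
        yield min(delay, cap)
        delay *= factor


# Sleep for each delay in turn, calling get(project_id) between sleeps,
# until the project stops reporting an in-progress conversion.
delays = list(backoff_schedule())
```

Capping the delay keeps the worst-case poll interval bounded even for long conversions.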
async def list(self, *, request_options: typing.Optional[RequestOptions] = None) -> GetProjectsResponse:
"""
Returns a list of your Studio projects with metadata.
Parameters
----------
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetProjectsResponse
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.list()
asyncio.run(main())
"""
_response = await self._raw_client.list(request_options=request_options)
return _response.data |
Returns a list of your Studio projects with metadata.
Parameters
----------
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetProjectsResponse
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.list()
asyncio.run(main())
| list | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/client.py | MIT |
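Each async example above wraps a single await in `asyncio.run`; when several projects are needed, the awaits can be issued concurrently instead of sequentially. A sketch using a stand-in coroutine in place of `client.studio.projects.get`:

```python
import asyncio


async def fetch_project(project_id: str) -> dict:
    """Stand-in for await client.studio.projects.get(project_id=...)."""
    await asyncio.sleep(0)  # simulates network I/O
    return {"project_id": project_id}


async def main() -> list:
    ids = ["21m00Tcm4TlvDq8ikWAM", "JzWtcGQMJ6bnlWwyMo7e"]
    # gather schedules all requests at once rather than one at a time
    return await asyncio.gather(*(fetch_project(i) for i in ids))


projects = asyncio.run(main())
```

The same shape applies to any of the async client methods; only the awaited call changes.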
async def create(
self,
*,
name: str,
default_title_voice_id: str,
default_paragraph_voice_id: str,
default_model_id: str,
from_url: typing.Optional[str] = OMIT,
from_document: typing.Optional[core.File] = OMIT,
quality_preset: typing.Optional[str] = OMIT,
title: typing.Optional[str] = OMIT,
author: typing.Optional[str] = OMIT,
description: typing.Optional[str] = OMIT,
genres: typing.Optional[typing.List[str]] = OMIT,
target_audience: typing.Optional[ProjectsCreateRequestTargetAudience] = OMIT,
language: typing.Optional[str] = OMIT,
content_type: typing.Optional[str] = OMIT,
original_publication_date: typing.Optional[str] = OMIT,
mature_content: typing.Optional[bool] = OMIT,
isbn_number: typing.Optional[str] = OMIT,
acx_volume_normalization: typing.Optional[bool] = OMIT,
volume_normalization: typing.Optional[bool] = OMIT,
pronunciation_dictionary_locators: typing.Optional[typing.List[str]] = OMIT,
callback_url: typing.Optional[str] = OMIT,
fiction: typing.Optional[ProjectsCreateRequestFiction] = OMIT,
apply_text_normalization: typing.Optional[ProjectsCreateRequestApplyTextNormalization] = OMIT,
auto_convert: typing.Optional[bool] = OMIT,
auto_assign_voices: typing.Optional[bool] = OMIT,
source_type: typing.Optional[ProjectsCreateRequestSourceType] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AddProjectResponseModel:
"""
Creates a new Studio project. It can be initialized as blank, from a document, or from a URL.
Parameters
----------
name : str
The name of the Studio project, used for identification only.
default_title_voice_id : str
The voice_id that corresponds to the default voice used for new titles.
default_paragraph_voice_id : str
The voice_id that corresponds to the default voice used for new paragraphs.
default_model_id : str
The ID of the model to be used for this Studio project; you can query GET /v1/models to list all available models.
from_url : typing.Optional[str]
An optional URL from which we will extract content to initialize the Studio project. If this is set, 'from_document' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the Studio project as blank.
from_document : typing.Optional[core.File]
See core.File for more documentation
quality_preset : typing.Optional[str]
Output quality of the generated audio. Must be one of:
standard - standard output format, 128kbps with 44.1kHz sample rate.
high - high quality output format, 192kbps with 44.1kHz sample rate and major improvements on our side. Using this setting increases the credit cost by 20%.
ultra - ultra quality output format, 192kbps with 44.1kHz sample rate and highest improvements on our side. Using this setting increases the credit cost by 50%.
ultra lossless - ultra quality output format, 705.6kbps with 44.1kHz sample rate and highest improvements on our side in a fully lossless format. Using this setting increases the credit cost by 100%.
title : typing.Optional[str]
An optional title of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
author : typing.Optional[str]
An optional name of the author of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
description : typing.Optional[str]
An optional description of the Studio project.
genres : typing.Optional[typing.List[str]]
An optional list of genres associated with the Studio project.
target_audience : typing.Optional[ProjectsCreateRequestTargetAudience]
An optional target audience of the Studio project.
language : typing.Optional[str]
An optional language of the Studio project. Two-letter language code (ISO 639-1).
content_type : typing.Optional[str]
An optional content type of the Studio project.
original_publication_date : typing.Optional[str]
An optional original publication date of the Studio project, in the format YYYY-MM-DD or YYYY.
mature_content : typing.Optional[bool]
An optional specification of whether this Studio project contains mature content.
isbn_number : typing.Optional[str]
An optional ISBN number of the Studio project you want to create, this will be added as metadata to the mp3 file on Studio project or chapter download.
acx_volume_normalization : typing.Optional[bool]
[Deprecated] When the Studio project is downloaded, whether the returned audio should be post-processed to comply with audiobook volume normalization requirements.
volume_normalization : typing.Optional[bool]
When the Studio project is downloaded, whether the returned audio should be post-processed to comply with audiobook volume normalization requirements.
pronunciation_dictionary_locators : typing.Optional[typing.List[str]]
A list of pronunciation dictionary locators (pronunciation_dictionary_id, version_id) encoded as a list of JSON strings for pronunciation dictionaries to be applied to the text. A list of json encoded strings is required as adding projects may occur through formData as opposed to jsonBody. To specify multiple dictionaries use multiple --form lines in your curl, such as --form 'pronunciation_dictionary_locators="{\"pronunciation_dictionary_id\":\"Vmd4Zor6fplcA7WrINey\",\"version_id\":\"hRPaxjlTdR7wFMhV4w0b\"}"' --form 'pronunciation_dictionary_locators="{\"pronunciation_dictionary_id\":\"JzWtcGQMJ6bnlWwyMo7e\",\"version_id\":\"lbmwxiLu4q6txYxgdZqn\"}"'. Note that multiple dictionaries are not currently supported by our UI which will only show the first.
callback_url : typing.Optional[str]
A URL that will be called by our service when the Studio project is converted. The request will contain a JSON blob with the status of the conversion.
fiction : typing.Optional[ProjectsCreateRequestFiction]
An optional specification of whether the content of this Studio project is fiction.
apply_text_normalization : typing.Optional[ProjectsCreateRequestApplyTextNormalization]
This parameter controls text normalization with four modes: 'auto', 'on', 'apply_english' and 'off'.
When set to 'auto', the system will automatically decide whether to apply text normalization
(e.g., spelling out numbers). With 'on', text normalization will always be applied, while
with 'off', it will be skipped. 'apply_english' is the same as 'on' but will assume that text is in English.
auto_convert : typing.Optional[bool]
Whether to automatically convert the Studio project to audio.
auto_assign_voices : typing.Optional[bool]
[Alpha Feature] Whether to automatically assign voices to phrases in the created project.
source_type : typing.Optional[ProjectsCreateRequestSourceType]
The type of Studio project to create.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddProjectResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.create(
name="name",
default_title_voice_id="default_title_voice_id",
default_paragraph_voice_id="default_paragraph_voice_id",
default_model_id="default_model_id",
)
asyncio.run(main())
"""
_response = await self._raw_client.create(
name=name,
default_title_voice_id=default_title_voice_id,
default_paragraph_voice_id=default_paragraph_voice_id,
default_model_id=default_model_id,
from_url=from_url,
from_document=from_document,
quality_preset=quality_preset,
title=title,
author=author,
description=description,
genres=genres,
target_audience=target_audience,
language=language,
content_type=content_type,
original_publication_date=original_publication_date,
mature_content=mature_content,
isbn_number=isbn_number,
acx_volume_normalization=acx_volume_normalization,
volume_normalization=volume_normalization,
pronunciation_dictionary_locators=pronunciation_dictionary_locators,
callback_url=callback_url,
fiction=fiction,
apply_text_normalization=apply_text_normalization,
auto_convert=auto_convert,
auto_assign_voices=auto_assign_voices,
source_type=source_type,
request_options=request_options,
)
return _response.data |
Creates a new Studio project. It can be initialized as blank, from a document, or from a URL.
Parameters
----------
name : str
The name of the Studio project, used for identification only.
default_title_voice_id : str
The voice_id that corresponds to the default voice used for new titles.
default_paragraph_voice_id : str
The voice_id that corresponds to the default voice used for new paragraphs.
default_model_id : str
The ID of the model to be used for this Studio project; you can query GET /v1/models to list all available models.
from_url : typing.Optional[str]
An optional URL from which we will extract content to initialize the Studio project. If this is set, 'from_document' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the Studio project as blank.
from_document : typing.Optional[core.File]
See core.File for more documentation
quality_preset : typing.Optional[str]
Output quality of the generated audio. Must be one of:
standard - standard output format, 128kbps with 44.1kHz sample rate.
high - high quality output format, 192kbps with 44.1kHz sample rate and major improvements on our side. Using this setting increases the credit cost by 20%.
ultra - ultra quality output format, 192kbps with 44.1kHz sample rate and highest improvements on our side. Using this setting increases the credit cost by 50%.
ultra lossless - ultra quality output format, 705.6kbps with 44.1kHz sample rate and highest improvements on our side in a fully lossless format. Using this setting increases the credit cost by 100%.
title : typing.Optional[str]
An optional title of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
author : typing.Optional[str]
An optional name of the author of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
description : typing.Optional[str]
An optional description of the Studio project.
genres : typing.Optional[typing.List[str]]
An optional list of genres associated with the Studio project.
target_audience : typing.Optional[ProjectsCreateRequestTargetAudience]
An optional target audience of the Studio project.
language : typing.Optional[str]
An optional language of the Studio project. Two-letter language code (ISO 639-1).
content_type : typing.Optional[str]
An optional content type of the Studio project.
original_publication_date : typing.Optional[str]
An optional original publication date of the Studio project, in the format YYYY-MM-DD or YYYY.
mature_content : typing.Optional[bool]
An optional specification of whether this Studio project contains mature content.
isbn_number : typing.Optional[str]
An optional ISBN number of the Studio project you want to create, this will be added as metadata to the mp3 file on Studio project or chapter download.
acx_volume_normalization : typing.Optional[bool]
[Deprecated] When the Studio project is downloaded, whether the returned audio should be post-processed to comply with audiobook volume normalization requirements.
volume_normalization : typing.Optional[bool]
When the Studio project is downloaded, whether the returned audio should be post-processed to comply with audiobook volume normalization requirements.
pronunciation_dictionary_locators : typing.Optional[typing.List[str]]
A list of pronunciation dictionary locators (pronunciation_dictionary_id, version_id) encoded as a list of JSON strings for pronunciation dictionaries to be applied to the text. A list of json encoded strings is required as adding projects may occur through formData as opposed to jsonBody. To specify multiple dictionaries use multiple --form lines in your curl, such as --form 'pronunciation_dictionary_locators="{"pronunciation_dictionary_id":"Vmd4Zor6fplcA7WrINey","version_id":"hRPaxjlTdR7wFMhV4w0b"}"' --form 'pronunciation_dictionary_locators="{"pronunciation_dictionary_id":"JzWtcGQMJ6bnlWwyMo7e","version_id":"lbmwxiLu4q6txYxgdZqn"}"'. Note that multiple dictionaries are not currently supported by our UI which will only show the first.
callback_url : typing.Optional[str]
A URL that will be called by our service when the Studio project is converted. The request will contain a JSON blob with the status of the conversion.
fiction : typing.Optional[ProjectsCreateRequestFiction]
An optional specification of whether the content of this Studio project is fiction.
apply_text_normalization : typing.Optional[ProjectsCreateRequestApplyTextNormalization]
This parameter controls text normalization with four modes: 'auto', 'on', 'apply_english' and 'off'.
When set to 'auto', the system will automatically decide whether to apply text normalization
(e.g., spelling out numbers). With 'on', text normalization will always be applied, while
with 'off', it will be skipped. 'apply_english' is the same as 'on' but will assume that text is in English.
auto_convert : typing.Optional[bool]
Whether to automatically convert the Studio project to audio.
auto_assign_voices : typing.Optional[bool]
[Alpha Feature] Whether to automatically assign voices to phrases in the created project.
source_type : typing.Optional[ProjectsCreateRequestSourceType]
The type of Studio project to create.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddProjectResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.create(
name="name",
default_title_voice_id="default_title_voice_id",
default_paragraph_voice_id="default_paragraph_voice_id",
default_model_id="default_model_id",
)
asyncio.run(main())
| create | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/client.py | MIT |
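The `pronunciation_dictionary_locators` parameter expects each locator JSON-encoded as a string, because the create request may travel as form data rather than a JSON body. A small helper for building those strings, matching the shape shown in the docstring's curl example:

```python
import json


def encode_locator(pronunciation_dictionary_id: str, version_id: str) -> str:
    """Encode one locator as the JSON string the endpoint expects."""
    return json.dumps(
        {
            "pronunciation_dictionary_id": pronunciation_dictionary_id,
            "version_id": version_id,
        },
        separators=(",", ":"),
    )


locators = [
    encode_locator("Vmd4Zor6fplcA7WrINey", "hRPaxjlTdR7wFMhV4w0b"),
    encode_locator("JzWtcGQMJ6bnlWwyMo7e", "lbmwxiLu4q6txYxgdZqn"),
]
# pass as pronunciation_dictionary_locators=locators in create(...)
```

Note the docstring's caveat still applies: the UI currently shows only the first dictionary.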
async def get(
self, project_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> ProjectExtendedResponse:
"""
Returns information about a specific Studio project. This endpoint returns more detailed information about a project than `GET /v1/studio`.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
ProjectExtendedResponse
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.get(
project_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
"""
_response = await self._raw_client.get(project_id, request_options=request_options)
return _response.data |
Returns information about a specific Studio project. This endpoint returns more detailed information about a project than `GET /v1/studio`.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
ProjectExtendedResponse
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.get(
project_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/client.py | MIT |
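The `quality_preset` description in `create` above lists credit-cost surcharges per preset: standard has none, high adds 20%, ultra adds 50%, and ultra lossless adds 100%. A hedged client-side estimator built from those numbers; actual billing is decided server-side:

```python
# Multipliers taken from the quality_preset description in create().
QUALITY_COST_MULTIPLIER = {
    "standard": 1.0,
    "high": 1.2,
    "ultra": 1.5,
    "ultra lossless": 2.0,
}


def estimate_credits(base_credits: float, quality_preset: str = "standard") -> float:
    """Rough estimate of credit cost for a given preset."""
    return base_credits * QUALITY_COST_MULTIPLIER[quality_preset]


cost = estimate_credits(1000, "ultra")
```

This is only a budgeting aid; treat the API's reported usage as authoritative.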
async def update(
self,
project_id: str,
*,
name: str,
default_title_voice_id: str,
default_paragraph_voice_id: str,
title: typing.Optional[str] = OMIT,
author: typing.Optional[str] = OMIT,
isbn_number: typing.Optional[str] = OMIT,
volume_normalization: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> EditProjectResponseModel:
"""
Updates the specified Studio project by setting the values of the parameters passed.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
name : str
The name of the Studio project, used for identification only.
default_title_voice_id : str
The voice_id that corresponds to the default voice used for new titles.
default_paragraph_voice_id : str
The voice_id that corresponds to the default voice used for new paragraphs.
title : typing.Optional[str]
An optional title of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
author : typing.Optional[str]
An optional name of the author of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
isbn_number : typing.Optional[str]
An optional ISBN number of the Studio project you want to create, this will be added as metadata to the mp3 file on Studio project or chapter download.
volume_normalization : typing.Optional[bool]
When the Studio project is downloaded, whether the returned audio should be post-processed to comply with audiobook volume normalization requirements.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
EditProjectResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.update(
project_id="21m00Tcm4TlvDq8ikWAM",
name="Project 1",
default_title_voice_id="21m00Tcm4TlvDq8ikWAM",
default_paragraph_voice_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
"""
_response = await self._raw_client.update(
project_id,
name=name,
default_title_voice_id=default_title_voice_id,
default_paragraph_voice_id=default_paragraph_voice_id,
title=title,
author=author,
isbn_number=isbn_number,
volume_normalization=volume_normalization,
request_options=request_options,
)
return _response.data |
Updates the specified Studio project by setting the values of the parameters passed.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
name : str
The name of the Studio project, used for identification only.
default_title_voice_id : str
The voice_id that corresponds to the default voice used for new titles.
default_paragraph_voice_id : str
The voice_id that corresponds to the default voice used for new paragraphs.
title : typing.Optional[str]
An optional title of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
author : typing.Optional[str]
An optional name of the author of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
isbn_number : typing.Optional[str]
An optional ISBN number of the Studio project you want to create, this will be added as metadata to the mp3 file on Studio project or chapter download.
volume_normalization : typing.Optional[bool]
When the Studio project is downloaded, whether the returned audio should be post-processed to comply with audiobook volume normalization requirements.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
EditProjectResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.update(
project_id="21m00Tcm4TlvDq8ikWAM",
name="Project 1",
default_title_voice_id="21m00Tcm4TlvDq8ikWAM",
default_paragraph_voice_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
| update | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/client.py | MIT |
async def delete(
self, project_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> DeleteProjectResponseModel:
"""
Deletes a Studio project.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
DeleteProjectResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.delete(
project_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
"""
_response = await self._raw_client.delete(project_id, request_options=request_options)
return _response.data |
Deletes a Studio project.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
DeleteProjectResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.delete(
project_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
| delete | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/client.py | MIT |
async def convert(
self, project_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> ConvertProjectResponseModel:
"""
Starts conversion of a Studio project and all of its chapters.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
ConvertProjectResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.convert(
project_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
"""
_response = await self._raw_client.convert(project_id, request_options=request_options)
return _response.data |
Starts conversion of a Studio project and all of its chapters.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
ConvertProjectResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.convert(
project_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
| convert | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/client.py | MIT |
def list(self, *, request_options: typing.Optional[RequestOptions] = None) -> HttpResponse[GetProjectsResponse]:
"""
Returns a list of your Studio projects with metadata.
Parameters
----------
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[GetProjectsResponse]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
"v1/studio/projects",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
GetProjectsResponse,
construct_type(
type_=GetProjectsResponse, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Returns a list of your Studio projects with metadata.
Parameters
----------
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[GetProjectsResponse]
Successful Response
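The status-code branching that every raw method above repeats can be sketched as a standalone function. This is an illustrative reconstruction, not SDK code: `FakeResponse`, `handle`, and the exception names here are hypothetical stand-ins for the SDK's `HttpResponse`, `UnprocessableEntityError`, and `ApiError`.

```python
import json


class FakeResponse:
    """Hypothetical stand-in for an httpx-style response object."""

    def __init__(self, status_code: int, body: str):
        self.status_code = status_code
        self.headers = {}
        self.text = body
        self._body = body

    def json(self):
        return json.loads(self._body)


class ValidationError(Exception):
    """Stand-in for UnprocessableEntityError (HTTP 422)."""


class ApiError(Exception):
    """Stand-in for the SDK's generic ApiError."""


def handle(response: FakeResponse):
    try:
        if 200 <= response.status_code < 300:
            # The SDK parses this into a typed model via construct_type.
            return response.json()
        if response.status_code == 422:
            raise ValidationError(response.json())
        response_json = response.json()  # other errors: try to surface the JSON body
    except json.JSONDecodeError:
        # Body was not JSON at all; fall back to the raw text.
        raise ApiError(response.text)
    raise ApiError(response_json)
```

The try/except placement matters: only the `json()` parsing is guarded, so the deliberately raised errors propagate while a non-JSON error body still produces a usable exception.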
| list | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/raw_client.py | MIT |
def create(
self,
*,
name: str,
default_title_voice_id: str,
default_paragraph_voice_id: str,
default_model_id: str,
from_url: typing.Optional[str] = OMIT,
from_document: typing.Optional[core.File] = OMIT,
quality_preset: typing.Optional[str] = OMIT,
title: typing.Optional[str] = OMIT,
author: typing.Optional[str] = OMIT,
description: typing.Optional[str] = OMIT,
genres: typing.Optional[typing.List[str]] = OMIT,
target_audience: typing.Optional[ProjectsCreateRequestTargetAudience] = OMIT,
language: typing.Optional[str] = OMIT,
content_type: typing.Optional[str] = OMIT,
original_publication_date: typing.Optional[str] = OMIT,
mature_content: typing.Optional[bool] = OMIT,
isbn_number: typing.Optional[str] = OMIT,
acx_volume_normalization: typing.Optional[bool] = OMIT,
volume_normalization: typing.Optional[bool] = OMIT,
pronunciation_dictionary_locators: typing.Optional[typing.List[str]] = OMIT,
callback_url: typing.Optional[str] = OMIT,
fiction: typing.Optional[ProjectsCreateRequestFiction] = OMIT,
apply_text_normalization: typing.Optional[ProjectsCreateRequestApplyTextNormalization] = OMIT,
auto_convert: typing.Optional[bool] = OMIT,
auto_assign_voices: typing.Optional[bool] = OMIT,
source_type: typing.Optional[ProjectsCreateRequestSourceType] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[AddProjectResponseModel]:
"""
Creates a new Studio project. It can be initialized as blank, from a document, or from a URL.
Parameters
----------
name : str
The name of the Studio project, used for identification only.
default_title_voice_id : str
The voice_id that corresponds to the default voice used for new titles.
default_paragraph_voice_id : str
The voice_id that corresponds to the default voice used for new paragraphs.
default_model_id : str
The ID of the model to be used for this Studio project, you can query GET /v1/models to list all available models.
from_url : typing.Optional[str]
An optional URL from which we will extract content to initialize the Studio project. If this is set, 'from_document' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the Studio project as blank.
from_document : typing.Optional[core.File]
See core.File for more documentation
quality_preset : typing.Optional[str]
Output quality of the generated audio. Must be one of:
standard - standard output format, 128kbps with 44.1kHz sample rate.
high - high quality output format, 192kbps with 44.1kHz sample rate and major improvements on our side. Using this setting increases the credit cost by 20%.
ultra - ultra quality output format, 192kbps with 44.1kHz sample rate and highest improvements on our side. Using this setting increases the credit cost by 50%.
ultra lossless - ultra quality output format, 705.6kbps with 44.1kHz sample rate and highest improvements on our side in a fully lossless format. Using this setting increases the credit cost by 100%.
title : typing.Optional[str]
An optional title of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
author : typing.Optional[str]
An optional name of the author of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
description : typing.Optional[str]
An optional description of the Studio project.
genres : typing.Optional[typing.List[str]]
An optional list of genres associated with the Studio project.
target_audience : typing.Optional[ProjectsCreateRequestTargetAudience]
An optional target audience of the Studio project.
language : typing.Optional[str]
An optional language of the Studio project. Two-letter language code (ISO 639-1).
content_type : typing.Optional[str]
An optional content type of the Studio project.
original_publication_date : typing.Optional[str]
An optional original publication date of the Studio project, in the format YYYY-MM-DD or YYYY.
mature_content : typing.Optional[bool]
An optional specification of whether this Studio project contains mature content.
isbn_number : typing.Optional[str]
An optional ISBN number of the Studio project you want to create, this will be added as metadata to the mp3 file on Studio project or chapter download.
acx_volume_normalization : typing.Optional[bool]
[Deprecated] When the Studio project is downloaded, should the returned audio have postprocessing in order to make it compliant with audiobook normalized volume requirements
volume_normalization : typing.Optional[bool]
When the Studio project is downloaded, should the returned audio have postprocessing in order to make it compliant with audiobook normalized volume requirements
pronunciation_dictionary_locators : typing.Optional[typing.List[str]]
A list of pronunciation dictionary locators (pronunciation_dictionary_id, version_id) encoded as a list of JSON strings for pronunciation dictionaries to be applied to the text. A list of json encoded strings is required as adding projects may occur through formData as opposed to jsonBody. To specify multiple dictionaries use multiple --form lines in your curl, such as --form 'pronunciation_dictionary_locators="{\"pronunciation_dictionary_id\":\"Vmd4Zor6fplcA7WrINey\",\"version_id\":\"hRPaxjlTdR7wFMhV4w0b\"}"' --form 'pronunciation_dictionary_locators="{\"pronunciation_dictionary_id\":\"JzWtcGQMJ6bnlWwyMo7e\",\"version_id\":\"lbmwxiLu4q6txYxgdZqn\"}"'. Note that multiple dictionaries are not currently supported by our UI which will only show the first.
callback_url : typing.Optional[str]
A URL that will be called by our service when the Studio project is converted. The request will contain a JSON blob with the status of the conversion.
fiction : typing.Optional[ProjectsCreateRequestFiction]
An optional specification of whether the content of this Studio project is fiction.
apply_text_normalization : typing.Optional[ProjectsCreateRequestApplyTextNormalization]
This parameter controls text normalization with four modes: 'auto', 'on', 'apply_english' and 'off'.
When set to 'auto', the system will automatically decide whether to apply text normalization
(e.g., spelling out numbers). With 'on', text normalization will always be applied, while
with 'off', it will be skipped. 'apply_english' is the same as 'on' but will assume that text is in English.
auto_convert : typing.Optional[bool]
Whether to auto convert the Studio project to audio or not.
auto_assign_voices : typing.Optional[bool]
[Alpha Feature] Whether to automatically assign voices to phrases in the created project.
source_type : typing.Optional[ProjectsCreateRequestSourceType]
The type of Studio project to create.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[AddProjectResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
"v1/studio/projects",
base_url=self._client_wrapper.get_environment().base,
method="POST",
data={
"name": name,
"default_title_voice_id": default_title_voice_id,
"default_paragraph_voice_id": default_paragraph_voice_id,
"default_model_id": default_model_id,
"from_url": from_url,
"quality_preset": quality_preset,
"title": title,
"author": author,
"description": description,
"genres": genres,
"target_audience": target_audience,
"language": language,
"content_type": content_type,
"original_publication_date": original_publication_date,
"mature_content": mature_content,
"isbn_number": isbn_number,
"acx_volume_normalization": acx_volume_normalization,
"volume_normalization": volume_normalization,
"pronunciation_dictionary_locators": pronunciation_dictionary_locators,
"callback_url": callback_url,
"fiction": fiction,
"apply_text_normalization": apply_text_normalization,
"auto_convert": auto_convert,
"auto_assign_voices": auto_assign_voices,
"source_type": source_type,
},
files={
**({"from_document": from_document} if from_document is not None else {}),
},
request_options=request_options,
omit=OMIT,
force_multipart=True,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
AddProjectResponseModel,
construct_type(
type_=AddProjectResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Creates a new Studio project. It can be initialized as blank, from a document, or from a URL.
Parameters
----------
name : str
The name of the Studio project, used for identification only.
default_title_voice_id : str
The voice_id that corresponds to the default voice used for new titles.
default_paragraph_voice_id : str
The voice_id that corresponds to the default voice used for new paragraphs.
default_model_id : str
The ID of the model to be used for this Studio project, you can query GET /v1/models to list all available models.
from_url : typing.Optional[str]
An optional URL from which we will extract content to initialize the Studio project. If this is set, 'from_document' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the Studio project as blank.
from_document : typing.Optional[core.File]
See core.File for more documentation
quality_preset : typing.Optional[str]
Output quality of the generated audio. Must be one of:
standard - standard output format, 128kbps with 44.1kHz sample rate.
high - high quality output format, 192kbps with 44.1kHz sample rate and major improvements on our side. Using this setting increases the credit cost by 20%.
ultra - ultra quality output format, 192kbps with 44.1kHz sample rate and highest improvements on our side. Using this setting increases the credit cost by 50%.
ultra lossless - ultra quality output format, 705.6kbps with 44.1kHz sample rate and highest improvements on our side in a fully lossless format. Using this setting increases the credit cost by 100%.
title : typing.Optional[str]
An optional title of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
author : typing.Optional[str]
An optional name of the author of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
description : typing.Optional[str]
An optional description of the Studio project.
genres : typing.Optional[typing.List[str]]
An optional list of genres associated with the Studio project.
target_audience : typing.Optional[ProjectsCreateRequestTargetAudience]
An optional target audience of the Studio project.
language : typing.Optional[str]
An optional language of the Studio project. Two-letter language code (ISO 639-1).
content_type : typing.Optional[str]
An optional content type of the Studio project.
original_publication_date : typing.Optional[str]
An optional original publication date of the Studio project, in the format YYYY-MM-DD or YYYY.
mature_content : typing.Optional[bool]
An optional specification of whether this Studio project contains mature content.
isbn_number : typing.Optional[str]
An optional ISBN number of the Studio project you want to create, this will be added as metadata to the mp3 file on Studio project or chapter download.
acx_volume_normalization : typing.Optional[bool]
[Deprecated] When the Studio project is downloaded, should the returned audio have postprocessing in order to make it compliant with audiobook normalized volume requirements
volume_normalization : typing.Optional[bool]
When the Studio project is downloaded, should the returned audio have postprocessing in order to make it compliant with audiobook normalized volume requirements
pronunciation_dictionary_locators : typing.Optional[typing.List[str]]
A list of pronunciation dictionary locators (pronunciation_dictionary_id, version_id) encoded as a list of JSON strings for pronunciation dictionaries to be applied to the text. A list of json encoded strings is required as adding projects may occur through formData as opposed to jsonBody. To specify multiple dictionaries use multiple --form lines in your curl, such as --form 'pronunciation_dictionary_locators="{"pronunciation_dictionary_id":"Vmd4Zor6fplcA7WrINey","version_id":"hRPaxjlTdR7wFMhV4w0b"}"' --form 'pronunciation_dictionary_locators="{"pronunciation_dictionary_id":"JzWtcGQMJ6bnlWwyMo7e","version_id":"lbmwxiLu4q6txYxgdZqn"}"'. Note that multiple dictionaries are not currently supported by our UI which will only show the first.
callback_url : typing.Optional[str]
A URL that will be called by our service when the Studio project is converted. The request will contain a JSON blob with the status of the conversion.
fiction : typing.Optional[ProjectsCreateRequestFiction]
An optional specification of whether the content of this Studio project is fiction.
apply_text_normalization : typing.Optional[ProjectsCreateRequestApplyTextNormalization]
This parameter controls text normalization with four modes: 'auto', 'on', 'apply_english' and 'off'.
When set to 'auto', the system will automatically decide whether to apply text normalization
(e.g., spelling out numbers). With 'on', text normalization will always be applied, while
with 'off', it will be skipped. 'apply_english' is the same as 'on' but will assume that text is in English.
auto_convert : typing.Optional[bool]
Whether to auto convert the Studio project to audio or not.
auto_assign_voices : typing.Optional[bool]
[Alpha Feature] Whether to automatically assign voices to phrases in the created project.
source_type : typing.Optional[ProjectsCreateRequestSourceType]
The type of Studio project to create.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[AddProjectResponseModel]
Successful Response
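The `pronunciation_dictionary_locators` parameter documented above expects each locator as a JSON object encoded into a string, because the request is sent as multipart form data rather than a JSON body. A minimal sketch of building those strings (the IDs are the sample values from the docstring; `make_locator` is an illustrative helper, not part of the SDK):

```python
import json


def make_locator(dictionary_id: str, version_id: str) -> str:
    # Each locator is a JSON object serialized to a string, matching the
    # --form 'pronunciation_dictionary_locators=...' shape shown above.
    return json.dumps(
        {"pronunciation_dictionary_id": dictionary_id, "version_id": version_id}
    )


locators = [
    make_locator("Vmd4Zor6fplcA7WrINey", "hRPaxjlTdR7wFMhV4w0b"),
    make_locator("JzWtcGQMJ6bnlWwyMo7e", "lbmwxiLu4q6txYxgdZqn"),
]
```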
| create | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/raw_client.py | MIT |
def get(
self, project_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> HttpResponse[ProjectExtendedResponse]:
"""
Returns information about a specific Studio project. This endpoint returns more detailed information about a project than `GET /v1/studio`.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[ProjectExtendedResponse]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
ProjectExtendedResponse,
construct_type(
type_=ProjectExtendedResponse, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Returns information about a specific Studio project. This endpoint returns more detailed information about a project than `GET /v1/studio`.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[ProjectExtendedResponse]
Successful Response
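The `get` method above encodes `project_id` before interpolating it into the request path. A stdlib sketch of the same idea, assuming percent-encoding semantics (`project_path` is an illustrative name, not an SDK function):

```python
from urllib.parse import quote


def project_path(project_id: str) -> str:
    # Percent-encode everything, including "/", so an unusual ID cannot
    # change which route is requested.
    return f"v1/studio/projects/{quote(project_id, safe='')}"
```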
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/raw_client.py | MIT |
def update(
self,
project_id: str,
*,
name: str,
default_title_voice_id: str,
default_paragraph_voice_id: str,
title: typing.Optional[str] = OMIT,
author: typing.Optional[str] = OMIT,
isbn_number: typing.Optional[str] = OMIT,
volume_normalization: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[EditProjectResponseModel]:
"""
Updates the specified Studio project by setting the values of the parameters passed.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
name : str
The name of the Studio project, used for identification only.
default_title_voice_id : str
The voice_id that corresponds to the default voice used for new titles.
default_paragraph_voice_id : str
The voice_id that corresponds to the default voice used for new paragraphs.
title : typing.Optional[str]
An optional title of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
author : typing.Optional[str]
An optional name of the author of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
isbn_number : typing.Optional[str]
An optional ISBN number of the Studio project you want to create, this will be added as metadata to the mp3 file on Studio project or chapter download.
volume_normalization : typing.Optional[bool]
When the Studio project is downloaded, should the returned audio have postprocessing in order to make it compliant with audiobook normalized volume requirements
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[EditProjectResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"name": name,
"default_title_voice_id": default_title_voice_id,
"default_paragraph_voice_id": default_paragraph_voice_id,
"title": title,
"author": author,
"isbn_number": isbn_number,
"volume_normalization": volume_normalization,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
EditProjectResponseModel,
construct_type(
type_=EditProjectResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Updates the specified Studio project by setting the values of the parameters passed.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
name : str
The name of the Studio project, used for identification only.
default_title_voice_id : str
The voice_id that corresponds to the default voice used for new titles.
default_paragraph_voice_id : str
The voice_id that corresponds to the default voice used for new paragraphs.
title : typing.Optional[str]
An optional title of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
author : typing.Optional[str]
An optional name of the author of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
isbn_number : typing.Optional[str]
An optional ISBN number of the Studio project you want to create, this will be added as metadata to the mp3 file on Studio project or chapter download.
volume_normalization : typing.Optional[bool]
When the Studio project is downloaded, should the returned audio have postprocessing in order to make it compliant with audiobook normalized volume requirements
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[EditProjectResponseModel]
Successful Response
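The `update` method above passes `omit=OMIT` so that optional fields left at the sentinel are dropped from the JSON payload. A minimal sketch of that sentinel pattern, with illustrative names (not the SDK's internals): it keeps "not provided" distinguishable from "explicitly None".

```python
# Module-level sentinel: identity comparison tells provided and omitted apart.
OMIT = object()


def build_payload(**fields):
    # Drop any field still set to the sentinel; keep explicit None values.
    return {key: value for key, value in fields.items() if value is not OMIT}


payload = build_payload(
    name="My project",
    title=OMIT,                  # omitted: key is absent from the payload
    volume_normalization=None,   # explicit None: key is kept
)
```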
| update | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/raw_client.py | MIT |
def delete(
self, project_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> HttpResponse[DeleteProjectResponseModel]:
"""
Deletes a Studio project.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[DeleteProjectResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}",
base_url=self._client_wrapper.get_environment().base,
method="DELETE",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
DeleteProjectResponseModel,
construct_type(
type_=DeleteProjectResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Deletes a Studio project.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[DeleteProjectResponseModel]
Successful Response
| delete | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/raw_client.py | MIT |
def convert(
self, project_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> HttpResponse[ConvertProjectResponseModel]:
"""
Starts conversion of a Studio project and all of its chapters.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[ConvertProjectResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/convert",
base_url=self._client_wrapper.get_environment().base,
method="POST",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
ConvertProjectResponseModel,
construct_type(
type_=ConvertProjectResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Starts conversion of a Studio project and all of its chapters.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[ConvertProjectResponseModel]
Successful Response
| convert | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/raw_client.py | MIT |
async def list(
self, *, request_options: typing.Optional[RequestOptions] = None
) -> AsyncHttpResponse[GetProjectsResponse]:
"""
Returns a list of your Studio projects with metadata.
Parameters
----------
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[GetProjectsResponse]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
"v1/studio/projects",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
GetProjectsResponse,
construct_type(
type_=GetProjectsResponse, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Returns a list of your Studio projects with metadata.
Parameters
----------
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[GetProjectsResponse]
Successful Response
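The async methods in this section are driven with the same `asyncio.run` pattern shown in the docstring examples earlier. A self-contained sketch of that pattern, where the coroutine is a stand-in for the awaited API call rather than a real request:

```python
import asyncio


async def fetch_projects() -> list:
    await asyncio.sleep(0)  # stands in for the awaited HTTP request
    return ["project_a", "project_b"]


async def main() -> list:
    return await fetch_projects()


projects = asyncio.run(main())
```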
| list | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/raw_client.py | MIT |
async def create(
self,
*,
name: str,
default_title_voice_id: str,
default_paragraph_voice_id: str,
default_model_id: str,
from_url: typing.Optional[str] = OMIT,
from_document: typing.Optional[core.File] = OMIT,
quality_preset: typing.Optional[str] = OMIT,
title: typing.Optional[str] = OMIT,
author: typing.Optional[str] = OMIT,
description: typing.Optional[str] = OMIT,
genres: typing.Optional[typing.List[str]] = OMIT,
target_audience: typing.Optional[ProjectsCreateRequestTargetAudience] = OMIT,
language: typing.Optional[str] = OMIT,
content_type: typing.Optional[str] = OMIT,
original_publication_date: typing.Optional[str] = OMIT,
mature_content: typing.Optional[bool] = OMIT,
isbn_number: typing.Optional[str] = OMIT,
acx_volume_normalization: typing.Optional[bool] = OMIT,
volume_normalization: typing.Optional[bool] = OMIT,
pronunciation_dictionary_locators: typing.Optional[typing.List[str]] = OMIT,
callback_url: typing.Optional[str] = OMIT,
fiction: typing.Optional[ProjectsCreateRequestFiction] = OMIT,
apply_text_normalization: typing.Optional[ProjectsCreateRequestApplyTextNormalization] = OMIT,
auto_convert: typing.Optional[bool] = OMIT,
auto_assign_voices: typing.Optional[bool] = OMIT,
source_type: typing.Optional[ProjectsCreateRequestSourceType] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[AddProjectResponseModel]:
"""
Creates a new Studio project. It can be initialized as blank, from a document, or from a URL.
Parameters
----------
name : str
The name of the Studio project, used for identification only.
default_title_voice_id : str
The voice_id that corresponds to the default voice used for new titles.
default_paragraph_voice_id : str
The voice_id that corresponds to the default voice used for new paragraphs.
default_model_id : str
The ID of the model to be used for this Studio project, you can query GET /v1/models to list all available models.
from_url : typing.Optional[str]
An optional URL from which we will extract content to initialize the Studio project. If this is set, 'from_document' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the Studio project as blank.
from_document : typing.Optional[core.File]
See core.File for more documentation
quality_preset : typing.Optional[str]
Output quality of the generated audio. Must be one of:
standard - standard output format, 128kbps with 44.1kHz sample rate.
high - high quality output format, 192kbps with 44.1kHz sample rate and major improvements on our side. Using this setting increases the credit cost by 20%.
ultra - ultra quality output format, 192kbps with 44.1kHz sample rate and highest improvements on our side. Using this setting increases the credit cost by 50%.
ultra lossless - ultra quality output format, 705.6kbps with 44.1kHz sample rate and highest improvements on our side in a fully lossless format. Using this setting increases the credit cost by 100%.
title : typing.Optional[str]
An optional title of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
author : typing.Optional[str]
An optional name of the author of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
description : typing.Optional[str]
An optional description of the Studio project.
genres : typing.Optional[typing.List[str]]
An optional list of genres associated with the Studio project.
target_audience : typing.Optional[ProjectsCreateRequestTargetAudience]
An optional target audience of the Studio project.
language : typing.Optional[str]
An optional language of the Studio project. Two-letter language code (ISO 639-1).
content_type : typing.Optional[str]
An optional content type of the Studio project.
original_publication_date : typing.Optional[str]
An optional original publication date of the Studio project, in the format YYYY-MM-DD or YYYY.
mature_content : typing.Optional[bool]
An optional specification of whether this Studio project contains mature content.
isbn_number : typing.Optional[str]
An optional ISBN number of the Studio project you want to create, this will be added as metadata to the mp3 file on Studio project or chapter download.
acx_volume_normalization : typing.Optional[bool]
[Deprecated] When the Studio project is downloaded, should the returned audio be post-processed to comply with audiobook volume normalization requirements.
volume_normalization : typing.Optional[bool]
When the Studio project is downloaded, should the returned audio be post-processed to comply with audiobook volume normalization requirements.
pronunciation_dictionary_locators : typing.Optional[typing.List[str]]
A list of pronunciation dictionary locators (pronunciation_dictionary_id, version_id) encoded as a list of JSON strings for pronunciation dictionaries to be applied to the text. A list of json encoded strings is required as adding projects may occur through formData as opposed to jsonBody. To specify multiple dictionaries use multiple --form lines in your curl, such as --form 'pronunciation_dictionary_locators="{\"pronunciation_dictionary_id\":\"Vmd4Zor6fplcA7WrINey\",\"version_id\":\"hRPaxjlTdR7wFMhV4w0b\"}"' --form 'pronunciation_dictionary_locators="{\"pronunciation_dictionary_id\":\"JzWtcGQMJ6bnlWwyMo7e\",\"version_id\":\"lbmwxiLu4q6txYxgdZqn\"}"'. Note that multiple dictionaries are not currently supported by our UI which will only show the first.
callback_url : typing.Optional[str]
A URL that will be called by our service when the Studio project is converted. The request will contain a JSON blob with the status of the conversion.
fiction : typing.Optional[ProjectsCreateRequestFiction]
An optional specification of whether the content of this Studio project is fiction.
apply_text_normalization : typing.Optional[ProjectsCreateRequestApplyTextNormalization]
This parameter controls text normalization with four modes: 'auto', 'on', 'apply_english' and 'off'.
When set to 'auto', the system will automatically decide whether to apply text normalization
(e.g., spelling out numbers). With 'on', text normalization will always be applied, while
with 'off', it will be skipped. 'apply_english' is the same as 'on' but will assume that text is in English.
auto_convert : typing.Optional[bool]
Whether to auto convert the Studio project to audio or not.
auto_assign_voices : typing.Optional[bool]
[Alpha Feature] Whether to automatically assign voices to phrases in the created project.
source_type : typing.Optional[ProjectsCreateRequestSourceType]
The type of Studio project to create.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[AddProjectResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
"v1/studio/projects",
base_url=self._client_wrapper.get_environment().base,
method="POST",
data={
"name": name,
"default_title_voice_id": default_title_voice_id,
"default_paragraph_voice_id": default_paragraph_voice_id,
"default_model_id": default_model_id,
"from_url": from_url,
"quality_preset": quality_preset,
"title": title,
"author": author,
"description": description,
"genres": genres,
"target_audience": target_audience,
"language": language,
"content_type": content_type,
"original_publication_date": original_publication_date,
"mature_content": mature_content,
"isbn_number": isbn_number,
"acx_volume_normalization": acx_volume_normalization,
"volume_normalization": volume_normalization,
"pronunciation_dictionary_locators": pronunciation_dictionary_locators,
"callback_url": callback_url,
"fiction": fiction,
"apply_text_normalization": apply_text_normalization,
"auto_convert": auto_convert,
"auto_assign_voices": auto_assign_voices,
"source_type": source_type,
},
files={
**({"from_document": from_document} if from_document is not None else {}),
},
request_options=request_options,
omit=OMIT,
force_multipart=True,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
AddProjectResponseModel,
construct_type(
type_=AddProjectResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
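The `quality_preset` docstring above lists fixed credit-cost surcharges per preset (high +20%, ultra +50%, ultra lossless +100%). A minimal sketch of estimating cost from those documented numbers; the table values come from the docstring, but `estimate_credit_cost` itself is illustrative only, not part of the SDK:

```python
# Illustrative helper only: surcharges as documented in the create() docstring.
QUALITY_PRESET_SURCHARGE = {
    "standard": 0.0,        # 128kbps, no surcharge
    "high": 0.20,           # 192kbps, +20% credit cost
    "ultra": 0.50,          # 192kbps, +50% credit cost
    "ultra lossless": 1.00, # 705.6kbps lossless, +100% credit cost
}

def estimate_credit_cost(base_credits: float, quality_preset: str = "standard") -> float:
    """Scale a base credit cost by the documented surcharge for a preset."""
    surcharge = QUALITY_PRESET_SURCHARGE[quality_preset]
    return base_credits * (1.0 + surcharge)
```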
# Source: elevenlabs/elevenlabs-python — src/elevenlabs/studio/projects/raw_client.py (MIT license)
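As the `pronunciation_dictionary_locators` description in `create` above notes, each locator must be a JSON-encoded string because the request is sent as form data rather than a JSON body. A sketch of building that list with the standard library, using the sample IDs from the docstring (the helper itself is our own, not an SDK function):

```python
import json

def encode_locators(locators: list[dict]) -> list[str]:
    # Each (pronunciation_dictionary_id, version_id) pair is serialized to a
    # compact JSON string, as required when the request goes out as form data.
    return [json.dumps(loc, separators=(",", ":")) for loc in locators]

encoded = encode_locators([
    {"pronunciation_dictionary_id": "Vmd4Zor6fplcA7WrINey", "version_id": "hRPaxjlTdR7wFMhV4w0b"},
])
```

The resulting strings can be passed directly as the `pronunciation_dictionary_locators` list.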
async def get(
self, project_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> AsyncHttpResponse[ProjectExtendedResponse]:
"""
Returns information about a specific Studio project. This endpoint returns more detailed information about a project than `GET /v1/studio`.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[ProjectExtendedResponse]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
ProjectExtendedResponse,
construct_type(
type_=ProjectExtendedResponse, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
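Each raw method above follows the same dispatch: a 2xx response is parsed into the typed model, a 422 raises a validation error, and anything else raises a generic API error, with a text fallback when the body is not valid JSON. A stdlib-only sketch of that control flow; the exception classes here are simplified stand-ins for the SDK's `ApiError` and `UnprocessableEntityError`, not the real ones:

```python
import json

class ApiError(Exception):
    def __init__(self, status_code, body):
        super().__init__(f"{status_code}: {body}")
        self.status_code, self.body = status_code, body

class UnprocessableEntityError(ApiError):
    pass

def dispatch(status_code: int, raw_body: str):
    # Mirrors the pattern in the methods above: success -> parsed data,
    # 422 -> validation error, otherwise -> ApiError with JSON or raw text.
    try:
        if 200 <= status_code < 300:
            return json.loads(raw_body)
        if status_code == 422:
            raise UnprocessableEntityError(status_code, json.loads(raw_body))
        body = json.loads(raw_body)
    except json.JSONDecodeError:
        raise ApiError(status_code, raw_body)
    raise ApiError(status_code, body)
```

Note that a 422 with an unparseable body falls through to the plain-text `ApiError`, matching the `JSONDecodeError` handler in the methods above.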
async def update(
self,
project_id: str,
*,
name: str,
default_title_voice_id: str,
default_paragraph_voice_id: str,
title: typing.Optional[str] = OMIT,
author: typing.Optional[str] = OMIT,
isbn_number: typing.Optional[str] = OMIT,
volume_normalization: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[EditProjectResponseModel]:
"""
Updates the specified Studio project by setting the values of the parameters passed.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
name : str
The name of the Studio project, used for identification only.
default_title_voice_id : str
The voice_id that corresponds to the default voice used for new titles.
default_paragraph_voice_id : str
The voice_id that corresponds to the default voice used for new paragraphs.
title : typing.Optional[str]
An optional title of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
author : typing.Optional[str]
An optional name of the author of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download.
isbn_number : typing.Optional[str]
An optional ISBN number of the Studio project you want to create, this will be added as metadata to the mp3 file on Studio project or chapter download.
volume_normalization : typing.Optional[bool]
When the Studio project is downloaded, should the returned audio be post-processed to comply with audiobook volume normalization requirements.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[EditProjectResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"name": name,
"default_title_voice_id": default_title_voice_id,
"default_paragraph_voice_id": default_paragraph_voice_id,
"title": title,
"author": author,
"isbn_number": isbn_number,
"volume_normalization": volume_normalization,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
EditProjectResponseModel,
construct_type(
type_=EditProjectResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
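The `OMIT` default used throughout these signatures is a sentinel that distinguishes "argument not supplied" from an explicit `None`, so unset fields can be dropped from the request body (the `omit=OMIT` argument passed to the request helper above). A minimal sketch of that filtering under the assumption that explicit `None` values are kept; the sentinel and helper here are illustrative, not the SDK's internals:

```python
OMIT = object()  # sentinel meaning: the caller never supplied this parameter

def drop_omitted(payload: dict) -> dict:
    # Drop OMIT entries entirely; explicit None values are assumed meaningful
    # and kept. This mirrors the omit=OMIT behavior of the request helpers above.
    return {k: v for k, v in payload.items() if v is not OMIT}
```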
async def delete(
self, project_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> AsyncHttpResponse[DeleteProjectResponseModel]:
"""
Deletes a Studio project.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[DeleteProjectResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}",
base_url=self._client_wrapper.get_environment().base,
method="DELETE",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
DeleteProjectResponseModel,
construct_type(
type_=DeleteProjectResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
async def convert(
self, project_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> AsyncHttpResponse[ConvertProjectResponseModel]:
"""
Starts conversion of a Studio project and all of its chapters.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[ConvertProjectResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/convert",
base_url=self._client_wrapper.get_environment().base,
method="POST",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
ConvertProjectResponseModel,
construct_type(
type_=ConvertProjectResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
def create(
self,
project_id: str,
*,
name: str,
from_url: typing.Optional[str] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AddChapterResponseModel:
"""
Creates a new chapter either as blank or from a URL.
Parameters
----------
project_id : str
The ID of the Studio project.
name : str
The name of the chapter, used for identification only.
from_url : typing.Optional[str]
An optional URL from which we will extract content to initialize the chapter. If it is not provided, the chapter will be initialized as blank.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddChapterResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.studio.projects.chapters.create(
project_id="21m00Tcm4TlvDq8ikWAM",
name="Chapter 1",
)
"""
_response = self._raw_client.create(project_id, name=name, from_url=from_url, request_options=request_options)
return _response.data |
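The project `create` docs earlier state that `from_url` and `from_document` are mutually exclusive and that omitting both yields a blank project. A hedged sketch of that pre-flight check as a standalone helper of our own (not an SDK function), useful before issuing the request:

```python
from typing import Optional

def initialization_mode(from_url: Optional[str] = None,
                        from_document: Optional[bytes] = None) -> str:
    # Mirrors the documented rule: at most one content source may be given;
    # providing neither means the project is initialized as blank.
    if from_url is not None and from_document is not None:
        raise ValueError("Provide either from_url or from_document, not both.")
    if from_url is not None:
        return "url"
    if from_document is not None:
        return "document"
    return "blank"
```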
# Source: elevenlabs/elevenlabs-python — src/elevenlabs/studio/projects/chapters/client.py (MIT license)
def get(
self, project_id: str, chapter_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> ChapterWithContentResponseModel:
"""
Returns information about a specific chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
ChapterWithContentResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.studio.projects.chapters.get(
project_id="21m00Tcm4TlvDq8ikWAM",
chapter_id="21m00Tcm4TlvDq8ikWAM",
)
"""
_response = self._raw_client.get(project_id, chapter_id, request_options=request_options)
return _response.data |
def update(
self,
project_id: str,
chapter_id: str,
*,
name: typing.Optional[str] = OMIT,
content: typing.Optional[ChapterContentInputModel] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> EditChapterResponseModel:
"""
Updates a chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
name : typing.Optional[str]
The name of the chapter, used for identification only.
content : typing.Optional[ChapterContentInputModel]
The chapter content to use.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
EditChapterResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.studio.projects.chapters.update(
project_id="21m00Tcm4TlvDq8ikWAM",
chapter_id="21m00Tcm4TlvDq8ikWAM",
)
"""
_response = self._raw_client.update(
project_id, chapter_id, name=name, content=content, request_options=request_options
)
return _response.data |
def delete(
self, project_id: str, chapter_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> DeleteChapterResponseModel:
"""
Deletes a chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
DeleteChapterResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.studio.projects.chapters.delete(
project_id="21m00Tcm4TlvDq8ikWAM",
chapter_id="21m00Tcm4TlvDq8ikWAM",
)
"""
_response = self._raw_client.delete(project_id, chapter_id, request_options=request_options)
return _response.data |
def convert(
self, project_id: str, chapter_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> ConvertChapterResponseModel:
"""
Starts conversion of a specific chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
ConvertChapterResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.studio.projects.chapters.convert(
project_id="21m00Tcm4TlvDq8ikWAM",
chapter_id="21m00Tcm4TlvDq8ikWAM",
)
"""
_response = self._raw_client.convert(project_id, chapter_id, request_options=request_options)
return _response.data |
Starts conversion of a specific chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
ConvertChapterResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.studio.projects.chapters.convert(
project_id="21m00Tcm4TlvDq8ikWAM",
chapter_id="21m00Tcm4TlvDq8ikWAM",
)
| convert | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/client.py | MIT |
async def list(
self, project_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> GetChaptersResponse:
"""
Returns a list of a Studio project's chapters.
Parameters
----------
project_id : str
The ID of the Studio project.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetChaptersResponse
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.chapters.list(
project_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
"""
_response = await self._raw_client.list(project_id, request_options=request_options)
return _response.data |
Returns a list of a Studio project's chapters.
Parameters
----------
project_id : str
The ID of the Studio project.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetChaptersResponse
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.chapters.list(
project_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
| list | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/client.py | MIT |
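Because the async client awaits each request, chapters for several projects can be fetched concurrently. The sketch below shows the `asyncio.gather` pattern; `fetch_chapters` is a local stand-in stub so the example runs without network access — in real use it would await `client.studio.projects.chapters.list(project_id=...)`.

```python
import asyncio

# Hypothetical stub standing in for the real awaited HTTP call.
async def fetch_chapters(project_id: str) -> list:
    await asyncio.sleep(0)  # placeholder for network latency
    return [f"{project_id}-chapter-1"]

async def gather_all(project_ids: list) -> list:
    # asyncio.gather runs the coroutines concurrently and preserves input order.
    results = await asyncio.gather(*(fetch_chapters(pid) for pid in project_ids))
    # Flatten the per-project lists into one list of chapters.
    return [chapter for chapters in results for chapter in chapters]

all_chapters = asyncio.run(gather_all(["proj_a", "proj_b"]))
```

The same shape applies to any of the async methods above (`get`, `convert`, `delete`), since each returns an awaitable.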
async def create(
self,
project_id: str,
*,
name: str,
from_url: typing.Optional[str] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AddChapterResponseModel:
"""
Creates a new chapter either as blank or from a URL.
Parameters
----------
project_id : str
The ID of the Studio project.
name : str
The name of the chapter, used for identification only.
from_url : typing.Optional[str]
An optional URL from which we will extract content to initialize the Studio project. If this is set, 'from_document' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the Studio project as blank.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddChapterResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.chapters.create(
project_id="21m00Tcm4TlvDq8ikWAM",
name="Chapter 1",
)
asyncio.run(main())
"""
_response = await self._raw_client.create(
project_id, name=name, from_url=from_url, request_options=request_options
)
return _response.data |
Creates a new chapter either as blank or from a URL.
Parameters
----------
project_id : str
The ID of the Studio project.
name : str
The name of the chapter, used for identification only.
from_url : typing.Optional[str]
An optional URL from which we will extract content to initialize the Studio project. If this is set, 'from_document' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the Studio project as blank.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddChapterResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.chapters.create(
project_id="21m00Tcm4TlvDq8ikWAM",
name="Chapter 1",
)
asyncio.run(main())
| create | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/client.py | MIT |
async def get(
self, project_id: str, chapter_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> ChapterWithContentResponseModel:
"""
Returns information about a specific chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
ChapterWithContentResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.chapters.get(
project_id="21m00Tcm4TlvDq8ikWAM",
chapter_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
"""
_response = await self._raw_client.get(project_id, chapter_id, request_options=request_options)
return _response.data |
Returns information about a specific chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
ChapterWithContentResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.chapters.get(
project_id="21m00Tcm4TlvDq8ikWAM",
chapter_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/client.py | MIT |
async def update(
self,
project_id: str,
chapter_id: str,
*,
name: typing.Optional[str] = OMIT,
content: typing.Optional[ChapterContentInputModel] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> EditChapterResponseModel:
"""
Updates a chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
name : typing.Optional[str]
The name of the chapter, used for identification only.
content : typing.Optional[ChapterContentInputModel]
The chapter content to use.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
EditChapterResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.chapters.update(
project_id="21m00Tcm4TlvDq8ikWAM",
chapter_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
"""
_response = await self._raw_client.update(
project_id, chapter_id, name=name, content=content, request_options=request_options
)
return _response.data |
Updates a chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
name : typing.Optional[str]
The name of the chapter, used for identification only.
content : typing.Optional[ChapterContentInputModel]
The chapter content to use.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
EditChapterResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.chapters.update(
project_id="21m00Tcm4TlvDq8ikWAM",
chapter_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
| update | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/client.py | MIT |
async def delete(
self, project_id: str, chapter_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> DeleteChapterResponseModel:
"""
Deletes a chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
DeleteChapterResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.chapters.delete(
project_id="21m00Tcm4TlvDq8ikWAM",
chapter_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
"""
_response = await self._raw_client.delete(project_id, chapter_id, request_options=request_options)
return _response.data |
Deletes a chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
DeleteChapterResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.chapters.delete(
project_id="21m00Tcm4TlvDq8ikWAM",
chapter_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
| delete | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/client.py | MIT |
async def convert(
self, project_id: str, chapter_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> ConvertChapterResponseModel:
"""
Starts conversion of a specific chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
ConvertChapterResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.chapters.convert(
project_id="21m00Tcm4TlvDq8ikWAM",
chapter_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
"""
_response = await self._raw_client.convert(project_id, chapter_id, request_options=request_options)
return _response.data |
Starts conversion of a specific chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
ConvertChapterResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.chapters.convert(
project_id="21m00Tcm4TlvDq8ikWAM",
chapter_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
| convert | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/client.py | MIT |
def list(
self, project_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> HttpResponse[GetChaptersResponse]:
"""
Returns a list of a Studio project's chapters.
Parameters
----------
project_id : str
The ID of the Studio project.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[GetChaptersResponse]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/chapters",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
GetChaptersResponse,
construct_type(
type_=GetChaptersResponse, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Returns a list of a Studio project's chapters.
Parameters
----------
project_id : str
The ID of the Studio project.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[GetChaptersResponse]
Successful Response
| list | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/raw_client.py | MIT |
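Every raw-client method above shares the same status-code dispatch: a 2xx body is parsed into the typed model, 422 raises a validation error, and anything else raises a generic `ApiError`. The sketch below isolates that logic with simplified stand-in exception classes (the SDK's own types carry headers and run `construct_type` on the JSON).

```python
# Simplified stand-ins for the SDK's ApiError / UnprocessableEntityError.
class ApiError(Exception):
    def __init__(self, status_code: int, body: object) -> None:
        super().__init__(f"status_code: {status_code}, body: {body}")
        self.status_code = status_code
        self.body = body

class UnprocessableEntityError(ApiError):
    pass

def dispatch(status_code: int, body: object) -> object:
    if 200 <= status_code < 300:
        return body  # the real client parses this into the typed response model
    if status_code == 422:
        raise UnprocessableEntityError(status_code, body)
    raise ApiError(status_code, body)
```

Since `UnprocessableEntityError` subclasses `ApiError`, callers can catch the specific validation error first and fall back to the generic handler.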
def create(
self,
project_id: str,
*,
name: str,
from_url: typing.Optional[str] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[AddChapterResponseModel]:
"""
Creates a new chapter either as blank or from a URL.
Parameters
----------
project_id : str
The ID of the Studio project.
name : str
The name of the chapter, used for identification only.
from_url : typing.Optional[str]
An optional URL from which we will extract content to initialize the Studio project. If this is set, 'from_document' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the Studio project as blank.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[AddChapterResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/chapters",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"name": name,
"from_url": from_url,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
AddChapterResponseModel,
construct_type(
type_=AddChapterResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Creates a new chapter either as blank or from a URL.
Parameters
----------
project_id : str
The ID of the Studio project.
name : str
The name of the chapter, used for identification only.
from_url : typing.Optional[str]
An optional URL from which we will extract content to initialize the Studio project. If this is set, 'from_document' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the Studio project as blank.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[AddChapterResponseModel]
Successful Response
| create | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/raw_client.py | MIT |
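The `omit=OMIT` argument passed to the request above reflects a sentinel pattern: optional fields default to a unique `OMIT` object and are stripped from the JSON body before sending, so "not provided" stays distinguishable from an explicit `None`. A minimal sketch of that idea, with a hypothetical `build_json_body` helper:

```python
# Unique sentinel meaning "the caller never supplied this field".
OMIT = object()

def build_json_body(**fields):
    # Drop any field still set to the OMIT sentinel; keep explicit None values,
    # which serialize to JSON null.
    return {key: value for key, value in fields.items() if value is not OMIT}

body = build_json_body(name="Chapter 1", from_url=OMIT)
```

This is why the generated signatures use `from_url: typing.Optional[str] = OMIT` rather than `= None`.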
def get(
self, project_id: str, chapter_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> HttpResponse[ChapterWithContentResponseModel]:
"""
Returns information about a specific chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[ChapterWithContentResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/chapters/{jsonable_encoder(chapter_id)}",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
ChapterWithContentResponseModel,
construct_type(
type_=ChapterWithContentResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Returns information about a specific chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[ChapterWithContentResponseModel]
Successful Response
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/raw_client.py | MIT |
def update(
self,
project_id: str,
chapter_id: str,
*,
name: typing.Optional[str] = OMIT,
content: typing.Optional[ChapterContentInputModel] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[EditChapterResponseModel]:
"""
Updates a chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
name : typing.Optional[str]
The name of the chapter, used for identification only.
content : typing.Optional[ChapterContentInputModel]
The chapter content to use.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[EditChapterResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/chapters/{jsonable_encoder(chapter_id)}",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"name": name,
"content": convert_and_respect_annotation_metadata(
object_=content, annotation=ChapterContentInputModel, direction="write"
),
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
EditChapterResponseModel,
construct_type(
type_=EditChapterResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Updates a chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
name : typing.Optional[str]
The name of the chapter, used for identification only.
content : typing.Optional[ChapterContentInputModel]
The chapter content to use.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[EditChapterResponseModel]
Successful Response
| update | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/raw_client.py | MIT |
def delete(
self, project_id: str, chapter_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> HttpResponse[DeleteChapterResponseModel]:
"""
Deletes a chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[DeleteChapterResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/chapters/{jsonable_encoder(chapter_id)}",
base_url=self._client_wrapper.get_environment().base,
method="DELETE",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
DeleteChapterResponseModel,
construct_type(
type_=DeleteChapterResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Deletes a chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[DeleteChapterResponseModel]
Successful Response
| delete | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/raw_client.py | MIT |
def convert(
self, project_id: str, chapter_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> HttpResponse[ConvertChapterResponseModel]:
"""
Starts conversion of a specific chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[ConvertChapterResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/chapters/{jsonable_encoder(chapter_id)}/convert",
base_url=self._client_wrapper.get_environment().base,
method="POST",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
ConvertChapterResponseModel,
construct_type(
type_=ConvertChapterResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Starts conversion of a specific chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[ConvertChapterResponseModel]
Successful Response
| convert | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/raw_client.py | MIT |
async def list(
self, project_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> AsyncHttpResponse[GetChaptersResponse]:
"""
Returns a list of a Studio project's chapters.
Parameters
----------
project_id : str
The ID of the Studio project.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[GetChaptersResponse]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/chapters",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
GetChaptersResponse,
construct_type(
type_=GetChaptersResponse, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Returns a list of a Studio project's chapters.
Parameters
----------
project_id : str
The ID of the Studio project.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[GetChaptersResponse]
Successful Response
| list | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/raw_client.py | MIT |
async def create(
self,
project_id: str,
*,
name: str,
from_url: typing.Optional[str] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[AddChapterResponseModel]:
"""
Creates a new chapter either as blank or from a URL.
Parameters
----------
project_id : str
The ID of the Studio project.
name : str
The name of the chapter, used for identification only.
from_url : typing.Optional[str]
An optional URL from which we will extract content to initialize the Studio project. If this is set, 'from_document' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the Studio project as blank.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[AddChapterResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/chapters",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"name": name,
"from_url": from_url,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
AddChapterResponseModel,
construct_type(
type_=AddChapterResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Creates a new chapter either as blank or from a URL.
Parameters
----------
project_id : str
The ID of the Studio project.
name : str
The name of the chapter, used for identification only.
from_url : typing.Optional[str]
An optional URL from which we will extract content to initialize the chapter. If this is set, 'from_document' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the chapter as blank.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[AddChapterResponseModel]
Successful Response
| create | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/raw_client.py | MIT |
async def get(
self, project_id: str, chapter_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> AsyncHttpResponse[ChapterWithContentResponseModel]:
"""
Returns information about a specific chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[ChapterWithContentResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/chapters/{jsonable_encoder(chapter_id)}",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
ChapterWithContentResponseModel,
construct_type(
type_=ChapterWithContentResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Returns information about a specific chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[ChapterWithContentResponseModel]
Successful Response
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/raw_client.py | MIT |
async def update(
self,
project_id: str,
chapter_id: str,
*,
name: typing.Optional[str] = OMIT,
content: typing.Optional[ChapterContentInputModel] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[EditChapterResponseModel]:
"""
Updates a chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
name : typing.Optional[str]
The name of the chapter, used for identification only.
content : typing.Optional[ChapterContentInputModel]
The chapter content to use.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[EditChapterResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/chapters/{jsonable_encoder(chapter_id)}",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"name": name,
"content": convert_and_respect_annotation_metadata(
object_=content, annotation=ChapterContentInputModel, direction="write"
),
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
EditChapterResponseModel,
construct_type(
type_=EditChapterResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Updates a chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
name : typing.Optional[str]
The name of the chapter, used for identification only.
content : typing.Optional[ChapterContentInputModel]
The chapter content to use.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[EditChapterResponseModel]
Successful Response
| update | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/raw_client.py | MIT |
async def delete(
self, project_id: str, chapter_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> AsyncHttpResponse[DeleteChapterResponseModel]:
"""
Deletes a chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[DeleteChapterResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/chapters/{jsonable_encoder(chapter_id)}",
base_url=self._client_wrapper.get_environment().base,
method="DELETE",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
DeleteChapterResponseModel,
construct_type(
type_=DeleteChapterResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Deletes a chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[DeleteChapterResponseModel]
Successful Response
| delete | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/raw_client.py | MIT |
async def convert(
self, project_id: str, chapter_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> AsyncHttpResponse[ConvertChapterResponseModel]:
"""
Starts conversion of a specific chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[ConvertChapterResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/chapters/{jsonable_encoder(chapter_id)}/convert",
base_url=self._client_wrapper.get_environment().base,
method="POST",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
ConvertChapterResponseModel,
construct_type(
type_=ConvertChapterResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Starts conversion of a specific chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[ConvertChapterResponseModel]
Successful Response
| convert | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/raw_client.py | MIT |