| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def list(
self, project_id: str, chapter_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> ChapterSnapshotsResponse:
"""
Gets information about all the snapshots of a chapter. Each snapshot can be downloaded as audio. Whenever a chapter is converted, a snapshot is automatically created.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
ChapterSnapshotsResponse
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.studio.projects.chapters.snapshots.list(
project_id="21m00Tcm4TlvDq8ikWAM",
chapter_id="21m00Tcm4TlvDq8ikWAM",
)
"""
_response = self._raw_client.list(project_id, chapter_id, request_options=request_options)
return _response.data |
| list | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/snapshots/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/snapshots/client.py | MIT |
def get(
self,
project_id: str,
chapter_id: str,
chapter_snapshot_id: str,
*,
request_options: typing.Optional[RequestOptions] = None,
) -> ChapterSnapshotExtendedResponseModel:
"""
Returns the chapter snapshot.
Parameters
----------
project_id : str
The ID of the Studio project.
chapter_id : str
The ID of the chapter.
chapter_snapshot_id : str
The ID of the chapter snapshot.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
ChapterSnapshotExtendedResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.studio.projects.chapters.snapshots.get(
project_id="21m00Tcm4TlvDq8ikWAM",
chapter_id="21m00Tcm4TlvDq8ikWAM",
chapter_snapshot_id="21m00Tcm4TlvDq8ikWAM",
)
"""
_response = self._raw_client.get(project_id, chapter_id, chapter_snapshot_id, request_options=request_options)
return _response.data |
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/snapshots/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/snapshots/client.py | MIT |
def stream(
self,
project_id: str,
chapter_id: str,
chapter_snapshot_id: str,
*,
convert_to_mpeg: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> typing.Iterator[bytes]:
"""
Stream the audio from a chapter snapshot. Use `GET /v1/studio/projects/{project_id}/chapters/{chapter_id}/snapshots` to return the snapshots of a chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
chapter_snapshot_id : str
The ID of the chapter snapshot to be used. You can use the [List project chapter snapshots](/docs/api-reference/studio/get-snapshots) endpoint to list all the available snapshots.
convert_to_mpeg : typing.Optional[bool]
Whether to convert the audio to mpeg format.
request_options : typing.Optional[RequestOptions]
Request-specific configuration. You can pass in options such as `chunk_size` to customize the request and response.
Returns
-------
typing.Iterator[bytes]
Streaming audio data
"""
with self._raw_client.stream(
project_id,
chapter_id,
chapter_snapshot_id,
convert_to_mpeg=convert_to_mpeg,
request_options=request_options,
) as r:
yield from r.data |
| stream | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/snapshots/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/snapshots/client.py | MIT |
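The sync `stream` method above yields raw audio bytes, so a typical consumer just writes each chunk to a sink. The sketch below shows that pattern with a local generator standing in for the SDK iterator (a real call would be `client.studio.projects.chapters.snapshots.stream(...)`, which needs a live API key; `save_audio` and `fake_stream` are illustrative names, not part of the SDK):

```python
import io

def save_audio(chunks, sink):
    """Write each bytes chunk to sink; return total bytes written."""
    total = 0
    for chunk in chunks:
        sink.write(chunk)
        total += len(chunk)
    return total

# Stand-in for the SDK iterator returned by stream(); each chunk is
# a bytes object whose size is governed by chunk_size (default 1024).
fake_stream = (b"\x00" * 1024 for _ in range(4))

buf = io.BytesIO()  # in practice: open("snapshot.mp3", "wb")
written = save_audio(fake_stream, buf)  # 4 chunks of 1024 bytes
```

Because the method returns a plain iterator, the same consumer works unchanged whether the audio is buffered in memory or streamed to disk.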
async def list(
self, project_id: str, chapter_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> ChapterSnapshotsResponse:
"""
Gets information about all the snapshots of a chapter. Each snapshot can be downloaded as audio. Whenever a chapter is converted, a snapshot is automatically created.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
ChapterSnapshotsResponse
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.chapters.snapshots.list(
project_id="21m00Tcm4TlvDq8ikWAM",
chapter_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
"""
_response = await self._raw_client.list(project_id, chapter_id, request_options=request_options)
return _response.data |
| list | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/snapshots/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/snapshots/client.py | MIT |
async def get(
self,
project_id: str,
chapter_id: str,
chapter_snapshot_id: str,
*,
request_options: typing.Optional[RequestOptions] = None,
) -> ChapterSnapshotExtendedResponseModel:
"""
Returns the chapter snapshot.
Parameters
----------
project_id : str
The ID of the Studio project.
chapter_id : str
The ID of the chapter.
chapter_snapshot_id : str
The ID of the chapter snapshot.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
ChapterSnapshotExtendedResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.chapters.snapshots.get(
project_id="21m00Tcm4TlvDq8ikWAM",
chapter_id="21m00Tcm4TlvDq8ikWAM",
chapter_snapshot_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
"""
_response = await self._raw_client.get(
project_id, chapter_id, chapter_snapshot_id, request_options=request_options
)
return _response.data |
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/snapshots/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/snapshots/client.py | MIT |
async def stream(
self,
project_id: str,
chapter_id: str,
chapter_snapshot_id: str,
*,
convert_to_mpeg: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> typing.AsyncIterator[bytes]:
"""
Stream the audio from a chapter snapshot. Use `GET /v1/studio/projects/{project_id}/chapters/{chapter_id}/snapshots` to return the snapshots of a chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
chapter_snapshot_id : str
The ID of the chapter snapshot to be used. You can use the [List project chapter snapshots](/docs/api-reference/studio/get-snapshots) endpoint to list all the available snapshots.
convert_to_mpeg : typing.Optional[bool]
Whether to convert the audio to mpeg format.
request_options : typing.Optional[RequestOptions]
Request-specific configuration. You can pass in options such as `chunk_size` to customize the request and response.
Returns
-------
typing.AsyncIterator[bytes]
Streaming audio data
"""
async with self._raw_client.stream(
project_id,
chapter_id,
chapter_snapshot_id,
convert_to_mpeg=convert_to_mpeg,
request_options=request_options,
) as r:
async for _chunk in r.data:
yield _chunk |
| stream | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/snapshots/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/snapshots/client.py | MIT |
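The async variant yields chunks through `async for`, so the consumer drains them inside a coroutine. A minimal sketch, with `fake_stream` simulating the SDK's async iterator (the real call would be `async for chunk in client.studio.projects.chapters.snapshots.stream(...)`; `collect` is an illustrative helper, not part of the SDK):

```python
import asyncio
import io

async def collect(chunks):
    """Drain an async byte iterator into memory and return the payload."""
    buf = io.BytesIO()
    async for chunk in chunks:
        buf.write(chunk)
    return buf.getvalue()

async def fake_stream():
    # Stand-in for the SDK's AsyncIterator[bytes].
    for _ in range(3):
        yield b"\x01" * 512

audio = asyncio.run(collect(fake_stream()))  # 3 chunks of 512 bytes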
def list(
self, project_id: str, chapter_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> HttpResponse[ChapterSnapshotsResponse]:
"""
Gets information about all the snapshots of a chapter. Each snapshot can be downloaded as audio. Whenever a chapter is converted, a snapshot is automatically created.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[ChapterSnapshotsResponse]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/chapters/{jsonable_encoder(chapter_id)}/snapshots",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
ChapterSnapshotsResponse,
construct_type(
type_=ChapterSnapshotsResponse, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
| list | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/snapshots/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/snapshots/raw_client.py | MIT |
def get(
self,
project_id: str,
chapter_id: str,
chapter_snapshot_id: str,
*,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[ChapterSnapshotExtendedResponseModel]:
"""
Returns the chapter snapshot.
Parameters
----------
project_id : str
The ID of the Studio project.
chapter_id : str
The ID of the chapter.
chapter_snapshot_id : str
The ID of the chapter snapshot.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[ChapterSnapshotExtendedResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/chapters/{jsonable_encoder(chapter_id)}/snapshots/{jsonable_encoder(chapter_snapshot_id)}",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
ChapterSnapshotExtendedResponseModel,
construct_type(
type_=ChapterSnapshotExtendedResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/snapshots/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/snapshots/raw_client.py | MIT |
def stream(
self,
project_id: str,
chapter_id: str,
chapter_snapshot_id: str,
*,
convert_to_mpeg: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> typing.Iterator[HttpResponse[typing.Iterator[bytes]]]:
"""
Stream the audio from a chapter snapshot. Use `GET /v1/studio/projects/{project_id}/chapters/{chapter_id}/snapshots` to return the snapshots of a chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
chapter_snapshot_id : str
The ID of the chapter snapshot to be used. You can use the [List project chapter snapshots](/docs/api-reference/studio/get-snapshots) endpoint to list all the available snapshots.
convert_to_mpeg : typing.Optional[bool]
Whether to convert the audio to mpeg format.
request_options : typing.Optional[RequestOptions]
Request-specific configuration. You can pass in options such as `chunk_size` to customize the request and response.
Returns
-------
typing.Iterator[HttpResponse[typing.Iterator[bytes]]]
Streaming audio data
"""
with self._client_wrapper.httpx_client.stream(
f"v1/studio/projects/{jsonable_encoder(project_id)}/chapters/{jsonable_encoder(chapter_id)}/snapshots/{jsonable_encoder(chapter_snapshot_id)}/stream",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"convert_to_mpeg": convert_to_mpeg,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
) as _response:
def _stream() -> HttpResponse[typing.Iterator[bytes]]:
try:
if 200 <= _response.status_code < 300:
_chunk_size = request_options.get("chunk_size", 1024) if request_options is not None else 1024
return HttpResponse(
response=_response, data=(_chunk for _chunk in _response.iter_bytes(chunk_size=_chunk_size))
)
_response.read()
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(
status_code=_response.status_code, headers=dict(_response.headers), body=_response.text
)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json)
yield _stream() |
| stream | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/snapshots/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/snapshots/raw_client.py | MIT |
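The raw client resolves `chunk_size` from `request_options` with a 1024-byte fallback, via the conditional expression `request_options.get("chunk_size", 1024) if request_options is not None else 1024`. That expression is equivalent to this small helper (`resolve_chunk_size` is an illustrative name, not an SDK function):

```python
def resolve_chunk_size(request_options, default=1024):
    """Resolve the streaming chunk size, falling back to `default`
    when request_options is None or omits the key."""
    if request_options is None:
        return default
    return request_options.get("chunk_size", default)

a = resolve_chunk_size(None)                    # no options -> default
b = resolve_chunk_size({})                      # key missing -> default
c = resolve_chunk_size({"chunk_size": 4096})    # explicit value wins
```

The None guard matters because `request_options` defaults to `None` rather than an empty dict, so a bare `.get()` would raise `AttributeError`.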
async def list(
self, project_id: str, chapter_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> AsyncHttpResponse[ChapterSnapshotsResponse]:
"""
Gets information about all the snapshots of a chapter. Each snapshot can be downloaded as audio. Whenever a chapter is converted, a snapshot is automatically created.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[ChapterSnapshotsResponse]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/chapters/{jsonable_encoder(chapter_id)}/snapshots",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
ChapterSnapshotsResponse,
construct_type(
type_=ChapterSnapshotsResponse, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
| list | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/snapshots/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/snapshots/raw_client.py | MIT |
async def get(
self,
project_id: str,
chapter_id: str,
chapter_snapshot_id: str,
*,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[ChapterSnapshotExtendedResponseModel]:
"""
Returns the chapter snapshot.
Parameters
----------
project_id : str
The ID of the Studio project.
chapter_id : str
The ID of the chapter.
chapter_snapshot_id : str
The ID of the chapter snapshot.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[ChapterSnapshotExtendedResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/chapters/{jsonable_encoder(chapter_id)}/snapshots/{jsonable_encoder(chapter_snapshot_id)}",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
ChapterSnapshotExtendedResponseModel,
construct_type(
type_=ChapterSnapshotExtendedResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/snapshots/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/snapshots/raw_client.py | MIT |
async def stream(
self,
project_id: str,
chapter_id: str,
chapter_snapshot_id: str,
*,
convert_to_mpeg: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> typing.AsyncIterator[AsyncHttpResponse[typing.AsyncIterator[bytes]]]:
"""
Stream the audio from a chapter snapshot. Use `GET /v1/studio/projects/{project_id}/chapters/{chapter_id}/snapshots` to return the snapshots of a chapter.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
chapter_id : str
The ID of the chapter to be used. You can use the [List project chapters](/docs/api-reference/studio/get-chapters) endpoint to list all the available chapters.
chapter_snapshot_id : str
The ID of the chapter snapshot to be used. You can use the [List project chapter snapshots](/docs/api-reference/studio/get-snapshots) endpoint to list all the available snapshots.
convert_to_mpeg : typing.Optional[bool]
Whether to convert the audio to mpeg format.
request_options : typing.Optional[RequestOptions]
Request-specific configuration. You can pass in options such as `chunk_size` to customize the request and response.
Returns
-------
typing.AsyncIterator[AsyncHttpResponse[typing.AsyncIterator[bytes]]]
Streaming audio data
"""
async with self._client_wrapper.httpx_client.stream(
f"v1/studio/projects/{jsonable_encoder(project_id)}/chapters/{jsonable_encoder(chapter_id)}/snapshots/{jsonable_encoder(chapter_snapshot_id)}/stream",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"convert_to_mpeg": convert_to_mpeg,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
) as _response:
async def _stream() -> AsyncHttpResponse[typing.AsyncIterator[bytes]]:
try:
if 200 <= _response.status_code < 300:
_chunk_size = request_options.get("chunk_size", 1024) if request_options is not None else 1024
return AsyncHttpResponse(
response=_response,
data=(_chunk async for _chunk in _response.aiter_bytes(chunk_size=_chunk_size)),
)
await _response.aread()
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(
status_code=_response.status_code, headers=dict(_response.headers), body=_response.text
)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json)
yield await _stream() |
| stream | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/chapters/snapshots/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/chapters/snapshots/raw_client.py | MIT |
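The raw streaming client above resolves a per-request chunk size from `request_options`, falling back to 1024 bytes (see the `_chunk_size` line in the code). A minimal sketch of that resolution logic — `resolve_chunk_size` is a hypothetical helper name, not part of the SDK:

```python
from typing import Any, Mapping, Optional


def resolve_chunk_size(request_options: Optional[Mapping[str, Any]]) -> int:
    """Mirror the raw client's chunk-size resolution: fall back to 1024
    bytes when request_options is absent or has no chunk_size entry."""
    if request_options is None:
        return 1024
    return request_options.get("chunk_size", 1024)


print(resolve_chunk_size(None))                  # 1024
print(resolve_chunk_size({"chunk_size": 4096}))  # 4096
```

Passing `request_options={"chunk_size": 4096}` to `stream` therefore changes how the response bytes are batched, not how much audio is returned.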
def update(
self,
project_id: str,
*,
from_url: typing.Optional[str] = OMIT,
from_document: typing.Optional[core.File] = OMIT,
auto_convert: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> EditProjectResponseModel:
"""
Updates Studio project content.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
from_url : typing.Optional[str]
An optional URL from which we will extract content to initialize the Studio project. If this is set, 'from_document' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the Studio project as blank.
from_document : typing.Optional[core.File]
See core.File for more documentation
auto_convert : typing.Optional[bool]
Whether to automatically convert the Studio project to audio.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
EditProjectResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.studio.projects.content.update(
project_id="21m00Tcm4TlvDq8ikWAM",
)
"""
_response = self._raw_client.update(
project_id,
from_url=from_url,
from_document=from_document,
auto_convert=auto_convert,
request_options=request_options,
)
return _response.data |
Updates Studio project content.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
from_url : typing.Optional[str]
An optional URL from which we will extract content to initialize the Studio project. If this is set, 'from_document' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the Studio project as blank.
from_document : typing.Optional[core.File]
See core.File for more documentation
auto_convert : typing.Optional[bool]
Whether to automatically convert the Studio project to audio.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
EditProjectResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.studio.projects.content.update(
project_id="21m00Tcm4TlvDq8ikWAM",
)
| update | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/content/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/content/client.py | MIT |
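The raw `update` request splits its arguments between multipart form fields and a file part: `from_url` and `auto_convert` travel as form data, while `from_document` is attached as a file only when present. A sketch of that split — `build_content_update_payload` is a hypothetical helper name, not SDK API:

```python
from typing import Any, Dict, Optional, Tuple


def build_content_update_payload(
    from_url: Optional[str] = None,
    from_document: Optional[Any] = None,
    auto_convert: Optional[bool] = None,
) -> Tuple[Dict[str, Any], Dict[str, Any]]:
    """Split arguments into (form data, file parts) the way the raw
    client does: scalar fields go in data; the document is added to
    files only when it is not None."""
    data = {"from_url": from_url, "auto_convert": auto_convert}
    files = {**({"from_document": from_document} if from_document is not None else {})}
    return data, files


data, files = build_content_update_payload(from_url="https://example.com/post")
print(files)  # {} -- no file part when from_document is omitted
```

This is why the request is forced to multipart (`force_multipart=True`) even when no document is uploaded.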
async def update(
self,
project_id: str,
*,
from_url: typing.Optional[str] = OMIT,
from_document: typing.Optional[core.File] = OMIT,
auto_convert: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> EditProjectResponseModel:
"""
Updates Studio project content.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
from_url : typing.Optional[str]
An optional URL from which we will extract content to initialize the Studio project. If this is set, 'from_document' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the Studio project as blank.
from_document : typing.Optional[core.File]
See core.File for more documentation
auto_convert : typing.Optional[bool]
Whether to automatically convert the Studio project to audio.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
EditProjectResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.content.update(
project_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
"""
_response = await self._raw_client.update(
project_id,
from_url=from_url,
from_document=from_document,
auto_convert=auto_convert,
request_options=request_options,
)
return _response.data |
Updates Studio project content.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
from_url : typing.Optional[str]
An optional URL from which we will extract content to initialize the Studio project. If this is set, 'from_document' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the Studio project as blank.
from_document : typing.Optional[core.File]
See core.File for more documentation
auto_convert : typing.Optional[bool]
Whether to automatically convert the Studio project to audio.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
EditProjectResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.content.update(
project_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
| update | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/content/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/content/client.py | MIT |
def update(
self,
project_id: str,
*,
from_url: typing.Optional[str] = OMIT,
from_document: typing.Optional[core.File] = OMIT,
auto_convert: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[EditProjectResponseModel]:
"""
Updates Studio project content.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
from_url : typing.Optional[str]
An optional URL from which we will extract content to initialize the Studio project. If this is set, 'from_document' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the Studio project as blank.
from_document : typing.Optional[core.File]
See core.File for more documentation
auto_convert : typing.Optional[bool]
Whether to automatically convert the Studio project to audio.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[EditProjectResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/content",
base_url=self._client_wrapper.get_environment().base,
method="POST",
data={
"from_url": from_url,
"auto_convert": auto_convert,
},
files={
**({"from_document": from_document} if from_document is not None else {}),
},
request_options=request_options,
omit=OMIT,
force_multipart=True,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
EditProjectResponseModel,
construct_type(
type_=EditProjectResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Updates Studio project content.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
from_url : typing.Optional[str]
An optional URL from which we will extract content to initialize the Studio project. If this is set, 'from_document' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the Studio project as blank.
from_document : typing.Optional[core.File]
See core.File for more documentation
auto_convert : typing.Optional[bool]
Whether to automatically convert the Studio project to audio.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[EditProjectResponseModel]
Successful Response
| update | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/content/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/content/raw_client.py | MIT |
async def update(
self,
project_id: str,
*,
from_url: typing.Optional[str] = OMIT,
from_document: typing.Optional[core.File] = OMIT,
auto_convert: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[EditProjectResponseModel]:
"""
Updates Studio project content.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
from_url : typing.Optional[str]
An optional URL from which we will extract content to initialize the Studio project. If this is set, 'from_document' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the Studio project as blank.
from_document : typing.Optional[core.File]
See core.File for more documentation
auto_convert : typing.Optional[bool]
Whether to automatically convert the Studio project to audio.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[EditProjectResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/content",
base_url=self._client_wrapper.get_environment().base,
method="POST",
data={
"from_url": from_url,
"auto_convert": auto_convert,
},
files={
**({"from_document": from_document} if from_document is not None else {}),
},
request_options=request_options,
omit=OMIT,
force_multipart=True,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
EditProjectResponseModel,
construct_type(
type_=EditProjectResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Updates Studio project content.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
from_url : typing.Optional[str]
An optional URL from which we will extract content to initialize the Studio project. If this is set, 'from_document' must be null. If neither 'from_url' nor 'from_document' is provided, we will initialize the Studio project as blank.
from_document : typing.Optional[core.File]
See core.File for more documentation
auto_convert : typing.Optional[bool]
Whether to automatically convert the Studio project to audio.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[EditProjectResponseModel]
Successful Response
| update | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/content/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/content/raw_client.py | MIT |
def create(
self,
project_id: str,
*,
pronunciation_dictionary_locators: typing.Sequence[PronunciationDictionaryVersionLocator],
invalidate_affected_text: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> CreatePronunciationDictionaryResponseModel:
"""
Create a set of pronunciation dictionaries acting on a project. This will automatically mark text within this project as requiring reconversion where the new dictionary would apply or the old one no longer does.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
pronunciation_dictionary_locators : typing.Sequence[PronunciationDictionaryVersionLocator]
A list of pronunciation dictionary locators (pronunciation_dictionary_id, version_id) encoded as a list of JSON strings for pronunciation dictionaries to be applied to the text. A list of JSON-encoded strings is required because adding projects may occur through formData as opposed to jsonBody. To specify multiple dictionaries, use multiple --form lines in your curl command, such as --form 'pronunciation_dictionary_locators="{\"pronunciation_dictionary_id\":\"Vmd4Zor6fplcA7WrINey\",\"version_id\":\"hRPaxjlTdR7wFMhV4w0b\"}"' --form 'pronunciation_dictionary_locators="{\"pronunciation_dictionary_id\":\"JzWtcGQMJ6bnlWwyMo7e\",\"version_id\":\"lbmwxiLu4q6txYxgdZqn\"}"'. Note that multiple dictionaries are not currently supported by our UI, which will only show the first.
invalidate_affected_text : typing.Optional[bool]
This will automatically mark text in this project for reconversion when the new dictionary applies or the old one no longer does.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
CreatePronunciationDictionaryResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs, PronunciationDictionaryVersionLocator
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.studio.projects.pronunciation_dictionaries.create(
project_id="21m00Tcm4TlvDq8ikWAM",
pronunciation_dictionary_locators=[
PronunciationDictionaryVersionLocator(
pronunciation_dictionary_id="pronunciation_dictionary_id",
)
],
)
"""
_response = self._raw_client.create(
project_id,
pronunciation_dictionary_locators=pronunciation_dictionary_locators,
invalidate_affected_text=invalidate_affected_text,
request_options=request_options,
)
return _response.data |
Create a set of pronunciation dictionaries acting on a project. This will automatically mark text within this project as requiring reconversion where the new dictionary would apply or the old one no longer does.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
pronunciation_dictionary_locators : typing.Sequence[PronunciationDictionaryVersionLocator]
A list of pronunciation dictionary locators (pronunciation_dictionary_id, version_id) encoded as a list of JSON strings for pronunciation dictionaries to be applied to the text. A list of JSON-encoded strings is required because adding projects may occur through formData as opposed to jsonBody. To specify multiple dictionaries, use multiple --form lines in your curl command, such as --form 'pronunciation_dictionary_locators="{"pronunciation_dictionary_id":"Vmd4Zor6fplcA7WrINey","version_id":"hRPaxjlTdR7wFMhV4w0b"}"' --form 'pronunciation_dictionary_locators="{"pronunciation_dictionary_id":"JzWtcGQMJ6bnlWwyMo7e","version_id":"lbmwxiLu4q6txYxgdZqn"}"'. Note that multiple dictionaries are not currently supported by our UI, which will only show the first.
invalidate_affected_text : typing.Optional[bool]
This will automatically mark text in this project for reconversion when the new dictionary applies or the old one no longer does.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
CreatePronunciationDictionaryResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs, PronunciationDictionaryVersionLocator
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.studio.projects.pronunciation_dictionaries.create(
project_id="21m00Tcm4TlvDq8ikWAM",
pronunciation_dictionary_locators=[
PronunciationDictionaryVersionLocator(
pronunciation_dictionary_id="pronunciation_dictionary_id",
)
],
)
| create | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/pronunciation_dictionaries/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/pronunciation_dictionaries/client.py | MIT |
async def create(
self,
project_id: str,
*,
pronunciation_dictionary_locators: typing.Sequence[PronunciationDictionaryVersionLocator],
invalidate_affected_text: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> CreatePronunciationDictionaryResponseModel:
"""
Create a set of pronunciation dictionaries acting on a project. This will automatically mark text within this project as requiring reconversion where the new dictionary would apply or the old one no longer does.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
pronunciation_dictionary_locators : typing.Sequence[PronunciationDictionaryVersionLocator]
A list of pronunciation dictionary locators (pronunciation_dictionary_id, version_id) encoded as a list of JSON strings for pronunciation dictionaries to be applied to the text. A list of JSON-encoded strings is required because adding projects may occur through formData as opposed to jsonBody. To specify multiple dictionaries, use multiple --form lines in your curl command, such as --form 'pronunciation_dictionary_locators="{\"pronunciation_dictionary_id\":\"Vmd4Zor6fplcA7WrINey\",\"version_id\":\"hRPaxjlTdR7wFMhV4w0b\"}"' --form 'pronunciation_dictionary_locators="{\"pronunciation_dictionary_id\":\"JzWtcGQMJ6bnlWwyMo7e\",\"version_id\":\"lbmwxiLu4q6txYxgdZqn\"}"'. Note that multiple dictionaries are not currently supported by our UI, which will only show the first.
invalidate_affected_text : typing.Optional[bool]
This will automatically mark text in this project for reconversion when the new dictionary applies or the old one no longer does.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
CreatePronunciationDictionaryResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs, PronunciationDictionaryVersionLocator
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.pronunciation_dictionaries.create(
project_id="21m00Tcm4TlvDq8ikWAM",
pronunciation_dictionary_locators=[
PronunciationDictionaryVersionLocator(
pronunciation_dictionary_id="pronunciation_dictionary_id",
)
],
)
asyncio.run(main())
"""
_response = await self._raw_client.create(
project_id,
pronunciation_dictionary_locators=pronunciation_dictionary_locators,
invalidate_affected_text=invalidate_affected_text,
request_options=request_options,
)
return _response.data |
Create a set of pronunciation dictionaries acting on a project. This will automatically mark text within this project as requiring reconversion where the new dictionary would apply or the old one no longer does.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
pronunciation_dictionary_locators : typing.Sequence[PronunciationDictionaryVersionLocator]
A list of pronunciation dictionary locators (pronunciation_dictionary_id, version_id) encoded as a list of JSON strings for pronunciation dictionaries to be applied to the text. A list of JSON-encoded strings is required because adding projects may occur through formData as opposed to jsonBody. To specify multiple dictionaries, use multiple --form lines in your curl command, such as --form 'pronunciation_dictionary_locators="{"pronunciation_dictionary_id":"Vmd4Zor6fplcA7WrINey","version_id":"hRPaxjlTdR7wFMhV4w0b"}"' --form 'pronunciation_dictionary_locators="{"pronunciation_dictionary_id":"JzWtcGQMJ6bnlWwyMo7e","version_id":"lbmwxiLu4q6txYxgdZqn"}"'. Note that multiple dictionaries are not currently supported by our UI, which will only show the first.
invalidate_affected_text : typing.Optional[bool]
This will automatically mark text in this project for reconversion when the new dictionary applies or the old one no longer does.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
CreatePronunciationDictionaryResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs, PronunciationDictionaryVersionLocator
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.studio.projects.pronunciation_dictionaries.create(
project_id="21m00Tcm4TlvDq8ikWAM",
pronunciation_dictionary_locators=[
PronunciationDictionaryVersionLocator(
pronunciation_dictionary_id="pronunciation_dictionary_id",
)
],
)
asyncio.run(main())
| create | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/pronunciation_dictionaries/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/pronunciation_dictionaries/client.py | MIT |
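The docstrings above note that form-data submissions expect each locator as a JSON-encoded string. A small sketch of producing one such string — `encode_locator` is a hypothetical helper, not part of the SDK, and the IDs are the sample values from the docstring:

```python
import json
from typing import Optional


def encode_locator(pronunciation_dictionary_id: str, version_id: Optional[str] = None) -> str:
    """JSON-encode a single pronunciation dictionary locator in the
    compact form shown in the docstring's curl example."""
    locator = {"pronunciation_dictionary_id": pronunciation_dictionary_id}
    if version_id is not None:
        locator["version_id"] = version_id
    # Compact separators match the quoted form-data examples above.
    return json.dumps(locator, separators=(",", ":"))


print(encode_locator("Vmd4Zor6fplcA7WrINey", "hRPaxjlTdR7wFMhV4w0b"))
# {"pronunciation_dictionary_id":"Vmd4Zor6fplcA7WrINey","version_id":"hRPaxjlTdR7wFMhV4w0b"}
```

When calling the Python client directly, you pass `PronunciationDictionaryVersionLocator` objects instead; the JSON-string form only matters for raw form-data requests such as curl.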
def create(
self,
project_id: str,
*,
pronunciation_dictionary_locators: typing.Sequence[PronunciationDictionaryVersionLocator],
invalidate_affected_text: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[CreatePronunciationDictionaryResponseModel]:
"""
Create a set of pronunciation dictionaries acting on a project. This will automatically mark text within this project as requiring reconversion where the new dictionary would apply or the old one no longer does.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
pronunciation_dictionary_locators : typing.Sequence[PronunciationDictionaryVersionLocator]
A list of pronunciation dictionary locators (pronunciation_dictionary_id, version_id) encoded as a list of JSON strings for pronunciation dictionaries to be applied to the text. A list of JSON-encoded strings is required because adding projects may occur through formData as opposed to jsonBody. To specify multiple dictionaries, use multiple --form lines in your curl command, such as --form 'pronunciation_dictionary_locators="{\"pronunciation_dictionary_id\":\"Vmd4Zor6fplcA7WrINey\",\"version_id\":\"hRPaxjlTdR7wFMhV4w0b\"}"' --form 'pronunciation_dictionary_locators="{\"pronunciation_dictionary_id\":\"JzWtcGQMJ6bnlWwyMo7e\",\"version_id\":\"lbmwxiLu4q6txYxgdZqn\"}"'. Note that multiple dictionaries are not currently supported by our UI, which will only show the first.
invalidate_affected_text : typing.Optional[bool]
This will automatically mark text in this project for reconversion when the new dictionary applies or the old one no longer does.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[CreatePronunciationDictionaryResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/pronunciation-dictionaries",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"pronunciation_dictionary_locators": convert_and_respect_annotation_metadata(
object_=pronunciation_dictionary_locators,
annotation=typing.Sequence[PronunciationDictionaryVersionLocator],
direction="write",
),
"invalidate_affected_text": invalidate_affected_text,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
CreatePronunciationDictionaryResponseModel,
construct_type(
type_=CreatePronunciationDictionaryResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Create a set of pronunciation dictionaries acting on a project. This will automatically mark text within this project as requiring reconversion where the new dictionary would apply or the old one no longer does.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
pronunciation_dictionary_locators : typing.Sequence[PronunciationDictionaryVersionLocator]
A list of pronunciation dictionary locators (pronunciation_dictionary_id, version_id) encoded as a list of JSON strings for pronunciation dictionaries to be applied to the text. A list of JSON-encoded strings is required because adding projects may occur through formData as opposed to jsonBody. To specify multiple dictionaries, use multiple --form lines in your curl command, such as --form 'pronunciation_dictionary_locators="{"pronunciation_dictionary_id":"Vmd4Zor6fplcA7WrINey","version_id":"hRPaxjlTdR7wFMhV4w0b"}"' --form 'pronunciation_dictionary_locators="{"pronunciation_dictionary_id":"JzWtcGQMJ6bnlWwyMo7e","version_id":"lbmwxiLu4q6txYxgdZqn"}"'. Note that multiple dictionaries are not currently supported by our UI, which will only show the first.
invalidate_affected_text : typing.Optional[bool]
This will automatically mark text in this project for reconversion when the new dictionary applies or the old one no longer does.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[CreatePronunciationDictionaryResponseModel]
Successful Response
| create | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/pronunciation_dictionaries/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/pronunciation_dictionaries/raw_client.py | MIT |
async def create(
self,
project_id: str,
*,
pronunciation_dictionary_locators: typing.Sequence[PronunciationDictionaryVersionLocator],
invalidate_affected_text: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[CreatePronunciationDictionaryResponseModel]:
"""
Create a set of pronunciation dictionaries acting on a project. This will automatically mark text within this project as requiring reconversion where the new dictionary would apply or the old one no longer does.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
pronunciation_dictionary_locators : typing.Sequence[PronunciationDictionaryVersionLocator]
A list of pronunciation dictionary locators (pronunciation_dictionary_id, version_id) encoded as a list of JSON strings for pronunciation dictionaries to be applied to the text. A list of JSON-encoded strings is required because adding projects may occur through formData as opposed to jsonBody. To specify multiple dictionaries, use multiple --form lines in your curl command, such as --form 'pronunciation_dictionary_locators="{\"pronunciation_dictionary_id\":\"Vmd4Zor6fplcA7WrINey\",\"version_id\":\"hRPaxjlTdR7wFMhV4w0b\"}"' --form 'pronunciation_dictionary_locators="{\"pronunciation_dictionary_id\":\"JzWtcGQMJ6bnlWwyMo7e\",\"version_id\":\"lbmwxiLu4q6txYxgdZqn\"}"'. Note that multiple dictionaries are not currently supported by our UI, which will only show the first.
invalidate_affected_text : typing.Optional[bool]
This will automatically mark text in this project for reconversion when the new dictionary applies or the old one no longer does.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[CreatePronunciationDictionaryResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/pronunciation-dictionaries",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"pronunciation_dictionary_locators": convert_and_respect_annotation_metadata(
object_=pronunciation_dictionary_locators,
annotation=typing.Sequence[PronunciationDictionaryVersionLocator],
direction="write",
),
"invalidate_affected_text": invalidate_affected_text,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
CreatePronunciationDictionaryResponseModel,
construct_type(
type_=CreatePronunciationDictionaryResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Create a set of pronunciation dictionaries acting on a project. This will automatically mark text within this project as requiring reconversion where the new dictionary would apply or the old one no longer does.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
pronunciation_dictionary_locators : typing.Sequence[PronunciationDictionaryVersionLocator]
A list of pronunciation dictionary locators (pronunciation_dictionary_id, version_id) encoded as a list of JSON strings for pronunciation dictionaries to be applied to the text. A list of JSON-encoded strings is required because adding projects may occur through formData as opposed to jsonBody. To specify multiple dictionaries, use multiple --form lines in your curl command, such as --form 'pronunciation_dictionary_locators="{"pronunciation_dictionary_id":"Vmd4Zor6fplcA7WrINey","version_id":"hRPaxjlTdR7wFMhV4w0b"}"' --form 'pronunciation_dictionary_locators="{"pronunciation_dictionary_id":"JzWtcGQMJ6bnlWwyMo7e","version_id":"lbmwxiLu4q6txYxgdZqn"}"'. Note that multiple dictionaries are not currently supported by our UI, which will only show the first.
invalidate_affected_text : typing.Optional[bool]
This will automatically mark text in this project for reconversion when the new dictionary applies or the old one no longer does.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[CreatePronunciationDictionaryResponseModel]
Successful Response
| create | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/pronunciation_dictionaries/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/pronunciation_dictionaries/raw_client.py | MIT |
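Every raw-client method above follows the same response dispatch: a 2xx status returns the parsed payload, 422 raises a validation error, and anything else raises a generic `ApiError`. A simplified, self-contained mirror of that pattern (the real SDK exceptions carry headers and parsed bodies; these stub classes are assumptions for illustration):

```python
from typing import Any


class ApiError(Exception):
    """Simplified stand-in for the SDK's ApiError."""

    def __init__(self, status_code: int) -> None:
        super().__init__(f"status {status_code}")
        self.status_code = status_code


class UnprocessableEntityError(ApiError):
    """Simplified stand-in for the SDK's 422 validation error."""


def dispatch(status_code: int, payload: Any) -> Any:
    """Mirror the raw client's handling: 2xx returns the payload,
    422 raises UnprocessableEntityError, anything else raises ApiError."""
    if 200 <= status_code < 300:
        return payload
    if status_code == 422:
        raise UnprocessableEntityError(422)
    raise ApiError(status_code)
```

Callers of the high-level clients therefore only need to catch `UnprocessableEntityError` for validation failures and `ApiError` for everything else.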
def list(
self, project_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> ProjectSnapshotsResponse:
"""
        Retrieves a list of snapshots for a Studio project.

        Parameters
        ----------
        project_id : str
            The ID of the Studio project.

        request_options : typing.Optional[RequestOptions]
            Request-specific configuration.

        Returns
        -------
        ProjectSnapshotsResponse
            Successful Response

        Examples
        --------
        from elevenlabs import ElevenLabs

        client = ElevenLabs(
            api_key="YOUR_API_KEY",
        )
        client.studio.projects.snapshots.list(
            project_id="21m00Tcm4TlvDq8ikWAM",
        )
"""
_response = self._raw_client.list(project_id, request_options=request_options)
return _response.data |
| list | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/snapshots/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/snapshots/client.py | MIT |
def get(
self, project_id: str, project_snapshot_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> ProjectSnapshotExtendedResponseModel:
"""
        Returns the project snapshot.

        Parameters
        ----------
        project_id : str
            The ID of the Studio project.

        project_snapshot_id : str
            The ID of the Studio project snapshot.

        request_options : typing.Optional[RequestOptions]
            Request-specific configuration.

        Returns
        -------
        ProjectSnapshotExtendedResponseModel
            Successful Response

        Examples
        --------
        from elevenlabs import ElevenLabs

        client = ElevenLabs(
            api_key="YOUR_API_KEY",
        )
        client.studio.projects.snapshots.get(
            project_id="21m00Tcm4TlvDq8ikWAM",
            project_snapshot_id="21m00Tcm4TlvDq8ikWAM",
        )
"""
_response = self._raw_client.get(project_id, project_snapshot_id, request_options=request_options)
return _response.data |
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/snapshots/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/snapshots/client.py | MIT |
def stream(
self,
project_id: str,
project_snapshot_id: str,
*,
convert_to_mpeg: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> typing.Iterator[bytes]:
"""
        Stream the audio from a Studio project snapshot.

        Parameters
        ----------
        project_id : str
            The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.

        project_snapshot_id : str
            The ID of the Studio project snapshot.

        convert_to_mpeg : typing.Optional[bool]
            Whether to convert the audio to MPEG format.

        request_options : typing.Optional[RequestOptions]
            Request-specific configuration. You can pass in configuration such as `chunk_size` to customize the request and response.

        Returns
        -------
        typing.Iterator[bytes]
            Successful Response
"""
with self._raw_client.stream(
project_id, project_snapshot_id, convert_to_mpeg=convert_to_mpeg, request_options=request_options
) as r:
yield from r.data |
| stream | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/snapshots/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/snapshots/client.py | MIT |
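Because `stream` yields raw `bytes` chunks, saving the snapshot audio is a matter of draining the iterator into a binary sink. A minimal sketch; the helper `save_audio_stream` is not part of the SDK, and the commented-out call uses placeholder IDs:

```python
import io
import typing


def save_audio_stream(chunks: typing.Iterator[bytes], sink: typing.BinaryIO) -> int:
    # Drain the chunk iterator into the sink and return total bytes written.
    total = 0
    for chunk in chunks:
        sink.write(chunk)
        total += len(chunk)
    return total


# With a live client (network call; IDs are placeholders):
# with open("snapshot.mp3", "wb") as f:
#     save_audio_stream(
#         client.studio.projects.snapshots.stream(
#             project_id="21m00Tcm4TlvDq8ikWAM",
#             project_snapshot_id="21m00Tcm4TlvDq8ikWAM",
#             convert_to_mpeg=True,
#         ),
#         f,
#     )

# Offline demonstration against an in-memory sink:
buffer = io.BytesIO()
written = save_audio_stream(iter([b"abc", b"de"]), buffer)
```

The same helper works for `stream_archive`, since both methods expose the response body as an iterator of byte chunks.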
def stream_archive(
self, project_id: str, project_snapshot_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> typing.Iterator[bytes]:
"""
        Returns a compressed archive of the Studio project's audio.

        Parameters
        ----------
        project_id : str
            The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.

        project_snapshot_id : str
            The ID of the Studio project snapshot.

        request_options : typing.Optional[RequestOptions]
            Request-specific configuration. You can pass in configuration such as `chunk_size` to customize the request and response.

        Returns
        -------
        typing.Iterator[bytes]
            Streaming archive data
"""
with self._raw_client.stream_archive(project_id, project_snapshot_id, request_options=request_options) as r:
yield from r.data |
| stream_archive | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/snapshots/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/snapshots/client.py | MIT |
async def list(
self, project_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> ProjectSnapshotsResponse:
"""
        Retrieves a list of snapshots for a Studio project.

        Parameters
        ----------
        project_id : str
            The ID of the Studio project.

        request_options : typing.Optional[RequestOptions]
            Request-specific configuration.

        Returns
        -------
        ProjectSnapshotsResponse
            Successful Response

        Examples
        --------
        import asyncio

        from elevenlabs import AsyncElevenLabs

        client = AsyncElevenLabs(
            api_key="YOUR_API_KEY",
        )


        async def main() -> None:
            await client.studio.projects.snapshots.list(
                project_id="21m00Tcm4TlvDq8ikWAM",
            )


        asyncio.run(main())
"""
_response = await self._raw_client.list(project_id, request_options=request_options)
return _response.data |
| list | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/snapshots/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/snapshots/client.py | MIT |
async def get(
self, project_id: str, project_snapshot_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> ProjectSnapshotExtendedResponseModel:
"""
        Returns the project snapshot.

        Parameters
        ----------
        project_id : str
            The ID of the Studio project.

        project_snapshot_id : str
            The ID of the Studio project snapshot.

        request_options : typing.Optional[RequestOptions]
            Request-specific configuration.

        Returns
        -------
        ProjectSnapshotExtendedResponseModel
            Successful Response

        Examples
        --------
        import asyncio

        from elevenlabs import AsyncElevenLabs

        client = AsyncElevenLabs(
            api_key="YOUR_API_KEY",
        )


        async def main() -> None:
            await client.studio.projects.snapshots.get(
                project_id="21m00Tcm4TlvDq8ikWAM",
                project_snapshot_id="21m00Tcm4TlvDq8ikWAM",
            )


        asyncio.run(main())
"""
_response = await self._raw_client.get(project_id, project_snapshot_id, request_options=request_options)
return _response.data |
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/snapshots/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/snapshots/client.py | MIT |
async def stream(
self,
project_id: str,
project_snapshot_id: str,
*,
convert_to_mpeg: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> typing.AsyncIterator[bytes]:
"""
        Stream the audio from a Studio project snapshot.

        Parameters
        ----------
        project_id : str
            The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.

        project_snapshot_id : str
            The ID of the Studio project snapshot.

        convert_to_mpeg : typing.Optional[bool]
            Whether to convert the audio to MPEG format.

        request_options : typing.Optional[RequestOptions]
            Request-specific configuration. You can pass in configuration such as `chunk_size` to customize the request and response.

        Returns
        -------
        typing.AsyncIterator[bytes]
            Successful Response
"""
async with self._raw_client.stream(
project_id, project_snapshot_id, convert_to_mpeg=convert_to_mpeg, request_options=request_options
) as r:
async for _chunk in r.data:
yield _chunk |
| stream | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/snapshots/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/snapshots/client.py | MIT |
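The async variant delivers chunks through an async iterator, so consumption uses `async for`. A sketch under the assumption that buffering the whole snapshot in memory is acceptable; `_fake_stream` is a stand-in for the network call so the example runs offline:

```python
import asyncio
import typing


async def collect_audio(chunks: typing.AsyncIterator[bytes]) -> bytes:
    # Accumulate every streamed chunk into a single bytes object.
    parts = []
    async for chunk in chunks:
        parts.append(chunk)
    return b"".join(parts)


async def _fake_stream() -> typing.AsyncIterator[bytes]:
    # Offline stand-in for client.studio.projects.snapshots.stream(...).
    for chunk in (b"ID3", b"\x00\x01"):
        yield chunk


audio = asyncio.run(collect_audio(_fake_stream()))
```

In real use you would pass `client.studio.projects.snapshots.stream(...)` from an `AsyncElevenLabs` client where `_fake_stream()` appears, inside a running event loop.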
async def stream_archive(
self, project_id: str, project_snapshot_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> typing.AsyncIterator[bytes]:
"""
        Returns a compressed archive of the Studio project's audio.

        Parameters
        ----------
        project_id : str
            The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.

        project_snapshot_id : str
            The ID of the Studio project snapshot.

        request_options : typing.Optional[RequestOptions]
            Request-specific configuration. You can pass in configuration such as `chunk_size` to customize the request and response.

        Returns
        -------
        typing.AsyncIterator[bytes]
            Streaming archive data
"""
async with self._raw_client.stream_archive(
project_id, project_snapshot_id, request_options=request_options
) as r:
async for _chunk in r.data:
yield _chunk |
| stream_archive | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/snapshots/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/snapshots/client.py | MIT |
def list(
self, project_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> HttpResponse[ProjectSnapshotsResponse]:
"""
        Retrieves a list of snapshots for a Studio project.

        Parameters
        ----------
        project_id : str
            The ID of the Studio project.

        request_options : typing.Optional[RequestOptions]
            Request-specific configuration.

        Returns
        -------
        HttpResponse[ProjectSnapshotsResponse]
            Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/snapshots",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
ProjectSnapshotsResponse,
construct_type(
type_=ProjectSnapshotsResponse, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
| list | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/snapshots/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/snapshots/raw_client.py | MIT |
def get(
self, project_id: str, project_snapshot_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> HttpResponse[ProjectSnapshotExtendedResponseModel]:
"""
        Returns the project snapshot.

        Parameters
        ----------
        project_id : str
            The ID of the Studio project.

        project_snapshot_id : str
            The ID of the Studio project snapshot.

        request_options : typing.Optional[RequestOptions]
            Request-specific configuration.

        Returns
        -------
        HttpResponse[ProjectSnapshotExtendedResponseModel]
            Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/snapshots/{jsonable_encoder(project_snapshot_id)}",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
ProjectSnapshotExtendedResponseModel,
construct_type(
type_=ProjectSnapshotExtendedResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/snapshots/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/snapshots/raw_client.py | MIT |
def stream(
self,
project_id: str,
project_snapshot_id: str,
*,
convert_to_mpeg: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> typing.Iterator[HttpResponse[typing.Iterator[bytes]]]:
"""
        Stream the audio from a Studio project snapshot.

        Parameters
        ----------
        project_id : str
            The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.

        project_snapshot_id : str
            The ID of the Studio project snapshot.

        convert_to_mpeg : typing.Optional[bool]
            Whether to convert the audio to MPEG format.

        request_options : typing.Optional[RequestOptions]
            Request-specific configuration. You can pass in configuration such as `chunk_size` to customize the request and response.

        Returns
        -------
        typing.Iterator[HttpResponse[typing.Iterator[bytes]]]
            Successful Response
"""
with self._client_wrapper.httpx_client.stream(
f"v1/studio/projects/{jsonable_encoder(project_id)}/snapshots/{jsonable_encoder(project_snapshot_id)}/stream",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"convert_to_mpeg": convert_to_mpeg,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
) as _response:
def _stream() -> HttpResponse[typing.Iterator[bytes]]:
try:
if 200 <= _response.status_code < 300:
_chunk_size = request_options.get("chunk_size", 1024) if request_options is not None else 1024
return HttpResponse(
response=_response, data=(_chunk for _chunk in _response.iter_bytes(chunk_size=_chunk_size))
)
_response.read()
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(
status_code=_response.status_code, headers=dict(_response.headers), body=_response.text
)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json)
yield _stream() |
| stream | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/snapshots/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/snapshots/raw_client.py | MIT |
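The raw client resolves the streaming chunk size from `request_options`, falling back to 1024 bytes, as seen in the method body above. That lookup can be mirrored in isolation (the helper name is illustrative only):

```python
import typing


def resolve_chunk_size(request_options: typing.Optional[dict]) -> int:
    # Mirrors the raw client's expression: prefer an explicit
    # request_options["chunk_size"], otherwise default to 1024 bytes.
    return request_options.get("chunk_size", 1024) if request_options is not None else 1024


default_size = resolve_chunk_size(None)
custom_size = resolve_chunk_size({"chunk_size": 8192})
```

Larger chunks mean fewer iterations per megabyte of audio; the trade-off is slightly higher latency before the first chunk is yielded.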
def stream_archive(
self, project_id: str, project_snapshot_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> typing.Iterator[HttpResponse[typing.Iterator[bytes]]]:
"""
        Returns a compressed archive of the Studio project's audio.

        Parameters
        ----------
        project_id : str
            The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.

        project_snapshot_id : str
            The ID of the Studio project snapshot.

        request_options : typing.Optional[RequestOptions]
            Request-specific configuration. You can pass in configuration such as `chunk_size` to customize the request and response.

        Returns
        -------
        typing.Iterator[HttpResponse[typing.Iterator[bytes]]]
            Streaming archive data
"""
with self._client_wrapper.httpx_client.stream(
f"v1/studio/projects/{jsonable_encoder(project_id)}/snapshots/{jsonable_encoder(project_snapshot_id)}/archive",
base_url=self._client_wrapper.get_environment().base,
method="POST",
request_options=request_options,
) as _response:
def _stream() -> HttpResponse[typing.Iterator[bytes]]:
try:
if 200 <= _response.status_code < 300:
_chunk_size = request_options.get("chunk_size", 1024) if request_options is not None else 1024
return HttpResponse(
response=_response, data=(_chunk for _chunk in _response.iter_bytes(chunk_size=_chunk_size))
)
_response.read()
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(
status_code=_response.status_code, headers=dict(_response.headers), body=_response.text
)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json)
yield _stream() |
| stream_archive | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/snapshots/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/snapshots/raw_client.py | MIT |
async def list(
self, project_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> AsyncHttpResponse[ProjectSnapshotsResponse]:
"""
        Retrieves a list of snapshots for a Studio project.

        Parameters
        ----------
        project_id : str
            The ID of the Studio project.

        request_options : typing.Optional[RequestOptions]
            Request-specific configuration.

        Returns
        -------
        AsyncHttpResponse[ProjectSnapshotsResponse]
            Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/snapshots",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
ProjectSnapshotsResponse,
construct_type(
type_=ProjectSnapshotsResponse, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
| list | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/snapshots/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/snapshots/raw_client.py | MIT |
async def get(
self, project_id: str, project_snapshot_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> AsyncHttpResponse[ProjectSnapshotExtendedResponseModel]:
"""
        Returns the project snapshot.

        Parameters
        ----------
        project_id : str
            The ID of the Studio project.

        project_snapshot_id : str
            The ID of the Studio project snapshot.

        request_options : typing.Optional[RequestOptions]
            Request-specific configuration.

        Returns
        -------
        AsyncHttpResponse[ProjectSnapshotExtendedResponseModel]
            Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/studio/projects/{jsonable_encoder(project_id)}/snapshots/{jsonable_encoder(project_snapshot_id)}",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
ProjectSnapshotExtendedResponseModel,
construct_type(
type_=ProjectSnapshotExtendedResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/snapshots/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/snapshots/raw_client.py | MIT |
async def stream(
self,
project_id: str,
project_snapshot_id: str,
*,
convert_to_mpeg: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> typing.AsyncIterator[AsyncHttpResponse[typing.AsyncIterator[bytes]]]:
"""
        Stream the audio from a Studio project snapshot.

        Parameters
        ----------
        project_id : str
            The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.

        project_snapshot_id : str
            The ID of the Studio project snapshot.

        convert_to_mpeg : typing.Optional[bool]
            Whether to convert the audio to MPEG format.

        request_options : typing.Optional[RequestOptions]
            Request-specific configuration. You can pass in configuration such as `chunk_size` to customize the request and response.

        Returns
        -------
        typing.AsyncIterator[AsyncHttpResponse[typing.AsyncIterator[bytes]]]
            Successful Response
"""
async with self._client_wrapper.httpx_client.stream(
f"v1/studio/projects/{jsonable_encoder(project_id)}/snapshots/{jsonable_encoder(project_snapshot_id)}/stream",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"convert_to_mpeg": convert_to_mpeg,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
) as _response:
async def _stream() -> AsyncHttpResponse[typing.AsyncIterator[bytes]]:
try:
if 200 <= _response.status_code < 300:
_chunk_size = request_options.get("chunk_size", 1024) if request_options is not None else 1024
return AsyncHttpResponse(
response=_response,
data=(_chunk async for _chunk in _response.aiter_bytes(chunk_size=_chunk_size)),
)
await _response.aread()
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(
status_code=_response.status_code, headers=dict(_response.headers), body=_response.text
)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json)
yield await _stream() |
| stream | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/snapshots/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/snapshots/raw_client.py | MIT |
async def stream_archive(
self, project_id: str, project_snapshot_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> typing.AsyncIterator[AsyncHttpResponse[typing.AsyncIterator[bytes]]]:
"""
        Returns a compressed archive of the Studio project's audio.

        Parameters
        ----------
        project_id : str
            The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.

        project_snapshot_id : str
            The ID of the Studio project snapshot.

        request_options : typing.Optional[RequestOptions]
            Request-specific configuration. You can pass in configuration such as `chunk_size` to customize the request and response.

        Returns
        -------
        typing.AsyncIterator[AsyncHttpResponse[typing.AsyncIterator[bytes]]]
            Streaming archive data
"""
async with self._client_wrapper.httpx_client.stream(
f"v1/studio/projects/{jsonable_encoder(project_id)}/snapshots/{jsonable_encoder(project_snapshot_id)}/archive",
base_url=self._client_wrapper.get_environment().base,
method="POST",
request_options=request_options,
) as _response:
async def _stream() -> AsyncHttpResponse[typing.AsyncIterator[bytes]]:
try:
if 200 <= _response.status_code < 300:
_chunk_size = request_options.get("chunk_size", 1024) if request_options is not None else 1024
return AsyncHttpResponse(
response=_response,
data=(_chunk async for _chunk in _response.aiter_bytes(chunk_size=_chunk_size)),
)
await _response.aread()
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(
status_code=_response.status_code, headers=dict(_response.headers), body=_response.text
)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json)
yield await _stream() |
Returns a compressed archive of the Studio project's audio.
Parameters
----------
project_id : str
The ID of the project to be used. You can use the [List projects](/docs/api-reference/studio/get-projects) endpoint to list all the available projects.
project_snapshot_id : str
The ID of the Studio project snapshot.
request_options : typing.Optional[RequestOptions]
Request-specific configuration. You can pass in options such as `chunk_size` to customize the request and response.
Returns
-------
typing.AsyncIterator[AsyncHttpResponse[typing.AsyncIterator[bytes]]]
Streaming archive data
| stream_archive | python | elevenlabs/elevenlabs-python | src/elevenlabs/studio/projects/snapshots/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/studio/projects/snapshots/raw_client.py | MIT |
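The streaming flow above reads the archive in chunks sized by `request_options["chunk_size"]`, defaulting to 1024 bytes. A minimal, self-contained sketch of that chunked async iteration, using stub data in place of a real HTTP response:

```python
import asyncio

async def aiter_bytes(payload: bytes, chunk_size: int):
    # Yield the payload in fixed-size chunks, mimicking response.aiter_bytes().
    for i in range(0, len(payload), chunk_size):
        yield payload[i:i + chunk_size]

async def download_archive(request_options=None) -> bytes:
    # Resolve chunk_size the same way the raw client does: fall back to 1024.
    chunk_size = request_options.get("chunk_size", 1024) if request_options else 1024
    payload = b"zip-bytes-" * 300  # stand-in for the real archive body
    chunks = [chunk async for chunk in aiter_bytes(payload, chunk_size)]
    return b"".join(chunks)

archive = asyncio.run(download_archive({"chunk_size": 256}))
```

The chunk size only affects how the body is buffered during iteration; the reassembled bytes are identical either way.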
def create_previews(
self,
*,
voice_description: str,
output_format: typing.Optional[TextToVoiceCreatePreviewsRequestOutputFormat] = None,
text: typing.Optional[str] = OMIT,
auto_generate_text: typing.Optional[bool] = OMIT,
loudness: typing.Optional[float] = OMIT,
quality: typing.Optional[float] = OMIT,
seed: typing.Optional[int] = OMIT,
guidance_scale: typing.Optional[float] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> VoiceDesignPreviewResponse:
"""
Create a voice from a text prompt.
Parameters
----------
voice_description : str
Description to use for the created voice.
output_format : typing.Optional[TextToVoiceCreatePreviewsRequestOutputFormat]
The output format of the generated audio.
text : typing.Optional[str]
Text to generate. The text length must be between 100 and 1000 characters.
auto_generate_text : typing.Optional[bool]
Whether to automatically generate a text suitable for the voice description.
loudness : typing.Optional[float]
Controls the volume level of the generated voice. -1 is quietest, 1 is loudest, 0 corresponds to roughly -24 LUFS.
quality : typing.Optional[float]
Higher quality results in better voice output but less variety.
seed : typing.Optional[int]
Random number that controls the voice generation. The same seed with the same inputs produces the same voice.
guidance_scale : typing.Optional[float]
Controls how closely the AI follows the prompt. Lower numbers give the AI more freedom to be creative, while higher numbers force it to stick more closely to the prompt. High numbers can cause the voice to sound artificial or robotic. We recommend using longer, more detailed prompts at a lower guidance scale.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
VoiceDesignPreviewResponse
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.text_to_voice.create_previews(
voice_description="A sassy squeaky mouse",
)
"""
_response = self._raw_client.create_previews(
voice_description=voice_description,
output_format=output_format,
text=text,
auto_generate_text=auto_generate_text,
loudness=loudness,
quality=quality,
seed=seed,
guidance_scale=guidance_scale,
request_options=request_options,
)
return _response.data |
Create a voice from a text prompt.
Parameters
----------
voice_description : str
Description to use for the created voice.
output_format : typing.Optional[TextToVoiceCreatePreviewsRequestOutputFormat]
The output format of the generated audio.
text : typing.Optional[str]
Text to generate. The text length must be between 100 and 1000 characters.
auto_generate_text : typing.Optional[bool]
Whether to automatically generate a text suitable for the voice description.
loudness : typing.Optional[float]
Controls the volume level of the generated voice. -1 is quietest, 1 is loudest, 0 corresponds to roughly -24 LUFS.
quality : typing.Optional[float]
Higher quality results in better voice output but less variety.
seed : typing.Optional[int]
Random number that controls the voice generation. The same seed with the same inputs produces the same voice.
guidance_scale : typing.Optional[float]
Controls how closely the AI follows the prompt. Lower numbers give the AI more freedom to be creative, while higher numbers force it to stick more closely to the prompt. High numbers can cause the voice to sound artificial or robotic. We recommend using longer, more detailed prompts at a lower guidance scale.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
VoiceDesignPreviewResponse
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.text_to_voice.create_previews(
voice_description="A sassy squeaky mouse",
)
| create_previews | python | elevenlabs/elevenlabs-python | src/elevenlabs/text_to_voice/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/text_to_voice/client.py | MIT |
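The `create_previews` docstring states two client-visible constraints: `text` must be 100 to 1000 characters and `loudness` ranges from -1 to 1. A hypothetical pre-flight validator sketching those checks (the helper is ours, not part of the SDK; the server still enforces its own validation and answers 422 on failure):

```python
from typing import Optional

def validate_preview_request(voice_description: str,
                             text: Optional[str] = None,
                             loudness: Optional[float] = None) -> None:
    # Client-side checks mirroring the documented constraints.
    if not voice_description:
        raise ValueError("voice_description is required")
    if text is not None and not 100 <= len(text) <= 1000:
        raise ValueError("text length must be between 100 and 1000 characters")
    if loudness is not None and not -1.0 <= loudness <= 1.0:
        raise ValueError("loudness must be within [-1, 1]")
```

Failing fast locally avoids a round trip that would only come back as an `UnprocessableEntityError`.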
def create_voice_from_preview(
self,
*,
voice_name: str,
voice_description: str,
generated_voice_id: str,
labels: typing.Optional[typing.Dict[str, typing.Optional[str]]] = OMIT,
played_not_selected_voice_ids: typing.Optional[typing.Sequence[str]] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> Voice:
"""
Add a generated voice to the voice library.
Parameters
----------
voice_name : str
Name to use for the created voice.
voice_description : str
Description to use for the created voice.
generated_voice_id : str
The generated_voice_id to create. Call POST /v1/text-to-voice/create-previews and fetch the generated_voice_id from the response header if you don't have one yet.
labels : typing.Optional[typing.Dict[str, typing.Optional[str]]]
Optional metadata to add to the created voice. Defaults to None.
played_not_selected_voice_ids : typing.Optional[typing.Sequence[str]]
List of voice ids that the user has played but not selected. Used for RLHF.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
Voice
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.text_to_voice.create_voice_from_preview(
voice_name="Sassy squeaky mouse",
voice_description="A sassy squeaky mouse",
generated_voice_id="37HceQefKmEi3bGovXjL",
)
"""
_response = self._raw_client.create_voice_from_preview(
voice_name=voice_name,
voice_description=voice_description,
generated_voice_id=generated_voice_id,
labels=labels,
played_not_selected_voice_ids=played_not_selected_voice_ids,
request_options=request_options,
)
return _response.data |
Add a generated voice to the voice library.
Parameters
----------
voice_name : str
Name to use for the created voice.
voice_description : str
Description to use for the created voice.
generated_voice_id : str
The generated_voice_id to create. Call POST /v1/text-to-voice/create-previews and fetch the generated_voice_id from the response header if you don't have one yet.
labels : typing.Optional[typing.Dict[str, typing.Optional[str]]]
Optional metadata to add to the created voice. Defaults to None.
played_not_selected_voice_ids : typing.Optional[typing.Sequence[str]]
List of voice ids that the user has played but not selected. Used for RLHF.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
Voice
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.text_to_voice.create_voice_from_preview(
voice_name="Sassy squeaky mouse",
voice_description="A sassy squeaky mouse",
generated_voice_id="37HceQefKmEi3bGovXjL",
)
| create_voice_from_preview | python | elevenlabs/elevenlabs-python | src/elevenlabs/text_to_voice/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/text_to_voice/client.py | MIT |
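`played_not_selected_voice_ids` feeds RLHF with the previews the user heard but rejected. A small sketch of deriving that list from a play history and the chosen voice (the helper name is ours, not part of the SDK):

```python
def played_not_selected(played_ids, selected_id):
    # Derive the RLHF feedback list: voices the user previewed but did not
    # keep, in play order and without duplicates.
    seen = set()
    result = []
    for vid in played_ids:
        if vid != selected_id and vid not in seen:
            seen.add(vid)
            result.append(vid)
    return result
```

The resulting list can be passed directly as `played_not_selected_voice_ids`.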
async def create_previews(
self,
*,
voice_description: str,
output_format: typing.Optional[TextToVoiceCreatePreviewsRequestOutputFormat] = None,
text: typing.Optional[str] = OMIT,
auto_generate_text: typing.Optional[bool] = OMIT,
loudness: typing.Optional[float] = OMIT,
quality: typing.Optional[float] = OMIT,
seed: typing.Optional[int] = OMIT,
guidance_scale: typing.Optional[float] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> VoiceDesignPreviewResponse:
"""
Create a voice from a text prompt.
Parameters
----------
voice_description : str
Description to use for the created voice.
output_format : typing.Optional[TextToVoiceCreatePreviewsRequestOutputFormat]
The output format of the generated audio.
text : typing.Optional[str]
Text to generate. The text length must be between 100 and 1000 characters.
auto_generate_text : typing.Optional[bool]
Whether to automatically generate a text suitable for the voice description.
loudness : typing.Optional[float]
Controls the volume level of the generated voice. -1 is quietest, 1 is loudest, 0 corresponds to roughly -24 LUFS.
quality : typing.Optional[float]
Higher quality results in better voice output but less variety.
seed : typing.Optional[int]
Random number that controls the voice generation. The same seed with the same inputs produces the same voice.
guidance_scale : typing.Optional[float]
Controls how closely the AI follows the prompt. Lower numbers give the AI more freedom to be creative, while higher numbers force it to stick more closely to the prompt. High numbers can cause the voice to sound artificial or robotic. We recommend using longer, more detailed prompts at a lower guidance scale.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
VoiceDesignPreviewResponse
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.text_to_voice.create_previews(
voice_description="A sassy squeaky mouse",
)
asyncio.run(main())
"""
_response = await self._raw_client.create_previews(
voice_description=voice_description,
output_format=output_format,
text=text,
auto_generate_text=auto_generate_text,
loudness=loudness,
quality=quality,
seed=seed,
guidance_scale=guidance_scale,
request_options=request_options,
)
return _response.data |
Create a voice from a text prompt.
Parameters
----------
voice_description : str
Description to use for the created voice.
output_format : typing.Optional[TextToVoiceCreatePreviewsRequestOutputFormat]
The output format of the generated audio.
text : typing.Optional[str]
Text to generate. The text length must be between 100 and 1000 characters.
auto_generate_text : typing.Optional[bool]
Whether to automatically generate a text suitable for the voice description.
loudness : typing.Optional[float]
Controls the volume level of the generated voice. -1 is quietest, 1 is loudest, 0 corresponds to roughly -24 LUFS.
quality : typing.Optional[float]
Higher quality results in better voice output but less variety.
seed : typing.Optional[int]
Random number that controls the voice generation. The same seed with the same inputs produces the same voice.
guidance_scale : typing.Optional[float]
Controls how closely the AI follows the prompt. Lower numbers give the AI more freedom to be creative, while higher numbers force it to stick more closely to the prompt. High numbers can cause the voice to sound artificial or robotic. We recommend using longer, more detailed prompts at a lower guidance scale.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
VoiceDesignPreviewResponse
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.text_to_voice.create_previews(
voice_description="A sassy squeaky mouse",
)
asyncio.run(main())
| create_previews | python | elevenlabs/elevenlabs-python | src/elevenlabs/text_to_voice/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/text_to_voice/client.py | MIT |
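Since the async client awaits one request at a time, several independent preview generations can be overlapped with `asyncio.gather`. A sketch using a stub coroutine in place of `client.text_to_voice.create_previews`:

```python
import asyncio
from typing import Dict, List

async def fake_create_previews(voice_description: str) -> Dict[str, str]:
    # Stand-in for `await client.text_to_voice.create_previews(...)`.
    await asyncio.sleep(0)
    return {"voice_description": voice_description}

async def preview_many(descriptions: List[str]):
    # Issue the requests concurrently instead of awaiting them one at a time.
    return await asyncio.gather(*(fake_create_previews(d) for d in descriptions))

results = asyncio.run(preview_many(["A sassy squeaky mouse", "A calm narrator"]))
```

`gather` preserves input order, so results line up with the descriptions that produced them.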
async def create_voice_from_preview(
self,
*,
voice_name: str,
voice_description: str,
generated_voice_id: str,
labels: typing.Optional[typing.Dict[str, typing.Optional[str]]] = OMIT,
played_not_selected_voice_ids: typing.Optional[typing.Sequence[str]] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> Voice:
"""
Add a generated voice to the voice library.
Parameters
----------
voice_name : str
Name to use for the created voice.
voice_description : str
Description to use for the created voice.
generated_voice_id : str
The generated_voice_id to create. Call POST /v1/text-to-voice/create-previews and fetch the generated_voice_id from the response header if you don't have one yet.
labels : typing.Optional[typing.Dict[str, typing.Optional[str]]]
Optional metadata to add to the created voice. Defaults to None.
played_not_selected_voice_ids : typing.Optional[typing.Sequence[str]]
List of voice ids that the user has played but not selected. Used for RLHF.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
Voice
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.text_to_voice.create_voice_from_preview(
voice_name="Sassy squeaky mouse",
voice_description="A sassy squeaky mouse",
generated_voice_id="37HceQefKmEi3bGovXjL",
)
asyncio.run(main())
"""
_response = await self._raw_client.create_voice_from_preview(
voice_name=voice_name,
voice_description=voice_description,
generated_voice_id=generated_voice_id,
labels=labels,
played_not_selected_voice_ids=played_not_selected_voice_ids,
request_options=request_options,
)
return _response.data |
Add a generated voice to the voice library.
Parameters
----------
voice_name : str
Name to use for the created voice.
voice_description : str
Description to use for the created voice.
generated_voice_id : str
The generated_voice_id to create. Call POST /v1/text-to-voice/create-previews and fetch the generated_voice_id from the response header if you don't have one yet.
labels : typing.Optional[typing.Dict[str, typing.Optional[str]]]
Optional metadata to add to the created voice. Defaults to None.
played_not_selected_voice_ids : typing.Optional[typing.Sequence[str]]
List of voice ids that the user has played but not selected. Used for RLHF.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
Voice
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.text_to_voice.create_voice_from_preview(
voice_name="Sassy squeaky mouse",
voice_description="A sassy squeaky mouse",
generated_voice_id="37HceQefKmEi3bGovXjL",
)
asyncio.run(main())
| create_voice_from_preview | python | elevenlabs/elevenlabs-python | src/elevenlabs/text_to_voice/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/text_to_voice/client.py | MIT |
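`labels` accepts `Optional[str]` values, so a caller may want to prune `None` entries before sending. A tiny convenience helper (ours, not part of the SDK) illustrating that:

```python
from typing import Dict, Optional

def clean_labels(labels: Dict[str, Optional[str]]) -> Dict[str, str]:
    # Drop None values so only meaningful metadata is attached to the
    # created voice; the endpoint itself does accept None values.
    return {k: v for k, v in labels.items() if v is not None}
```

Whether to prune is a style choice; sending `None` values is also valid per the parameter's type.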
def create_previews(
self,
*,
voice_description: str,
output_format: typing.Optional[TextToVoiceCreatePreviewsRequestOutputFormat] = None,
text: typing.Optional[str] = OMIT,
auto_generate_text: typing.Optional[bool] = OMIT,
loudness: typing.Optional[float] = OMIT,
quality: typing.Optional[float] = OMIT,
seed: typing.Optional[int] = OMIT,
guidance_scale: typing.Optional[float] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[VoiceDesignPreviewResponse]:
"""
Create a voice from a text prompt.
Parameters
----------
voice_description : str
Description to use for the created voice.
output_format : typing.Optional[TextToVoiceCreatePreviewsRequestOutputFormat]
The output format of the generated audio.
text : typing.Optional[str]
Text to generate. The text length must be between 100 and 1000 characters.
auto_generate_text : typing.Optional[bool]
Whether to automatically generate a text suitable for the voice description.
loudness : typing.Optional[float]
Controls the volume level of the generated voice. -1 is quietest, 1 is loudest, 0 corresponds to roughly -24 LUFS.
quality : typing.Optional[float]
Higher quality results in better voice output but less variety.
seed : typing.Optional[int]
Random number that controls the voice generation. The same seed with the same inputs produces the same voice.
guidance_scale : typing.Optional[float]
Controls how closely the AI follows the prompt. Lower numbers give the AI more freedom to be creative, while higher numbers force it to stick more closely to the prompt. High numbers can cause the voice to sound artificial or robotic. We recommend using longer, more detailed prompts at a lower guidance scale.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[VoiceDesignPreviewResponse]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
"v1/text-to-voice/create-previews",
base_url=self._client_wrapper.get_environment().base,
method="POST",
params={
"output_format": output_format,
},
json={
"voice_description": voice_description,
"text": text,
"auto_generate_text": auto_generate_text,
"loudness": loudness,
"quality": quality,
"seed": seed,
"guidance_scale": guidance_scale,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
VoiceDesignPreviewResponse,
construct_type(
type_=VoiceDesignPreviewResponse, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Create a voice from a text prompt.
Parameters
----------
voice_description : str
Description to use for the created voice.
output_format : typing.Optional[TextToVoiceCreatePreviewsRequestOutputFormat]
The output format of the generated audio.
text : typing.Optional[str]
Text to generate, text length has to be between 100 and 1000.
auto_generate_text : typing.Optional[bool]
Whether to automatically generate a text suitable for the voice description.
loudness : typing.Optional[float]
Controls the volume level of the generated voice. -1 is quietest, 1 is loudest, 0 corresponds to roughly -24 LUFS.
quality : typing.Optional[float]
Higher quality results in better voice output but less variety.
seed : typing.Optional[int]
Random number that controls the voice generation. The same seed with the same inputs produces the same voice.
guidance_scale : typing.Optional[float]
Controls how closely the AI follows the prompt. Lower numbers give the AI more freedom to be creative, while higher numbers force it to stick more closely to the prompt. High numbers can cause the voice to sound artificial or robotic. We recommend using longer, more detailed prompts at a lower guidance scale.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[VoiceDesignPreviewResponse]
Successful Response
| create_previews | python | elevenlabs/elevenlabs-python | src/elevenlabs/text_to_voice/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/text_to_voice/raw_client.py | MIT |
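The raw client above follows one dispatch pattern throughout: a 2xx body is parsed into the typed model, 422 becomes `UnprocessableEntityError`, anything else becomes `ApiError`, and a body that fails JSON decoding falls back to the raw text. A condensed, dependency-free sketch of that flow with stub exception classes:

```python
import json

class ApiError(Exception):
    def __init__(self, status_code, body):
        super().__init__(f"{status_code}: {body!r}")
        self.status_code, self.body = status_code, body

class UnprocessableEntityError(ApiError):
    pass

def dispatch(status_code: int, raw_body: str):
    # Mirror the raw client's flow: parse JSON first, fall back to raw text.
    try:
        body = json.loads(raw_body)
    except json.JSONDecodeError:
        raise ApiError(status_code, raw_body)
    if 200 <= status_code < 300:
        return body
    if status_code == 422:
        raise UnprocessableEntityError(status_code, body)
    raise ApiError(status_code, body)
```

In the real client the 2xx branch additionally coerces the parsed body into a typed model via `construct_type`; that step is elided here.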
def create_voice_from_preview(
self,
*,
voice_name: str,
voice_description: str,
generated_voice_id: str,
labels: typing.Optional[typing.Dict[str, typing.Optional[str]]] = OMIT,
played_not_selected_voice_ids: typing.Optional[typing.Sequence[str]] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[Voice]:
"""
Add a generated voice to the voice library.
Parameters
----------
voice_name : str
Name to use for the created voice.
voice_description : str
Description to use for the created voice.
generated_voice_id : str
The generated_voice_id to create. Call POST /v1/text-to-voice/create-previews and fetch the generated_voice_id from the response header if you don't have one yet.
labels : typing.Optional[typing.Dict[str, typing.Optional[str]]]
Optional metadata to add to the created voice. Defaults to None.
played_not_selected_voice_ids : typing.Optional[typing.Sequence[str]]
List of voice ids that the user has played but not selected. Used for RLHF.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[Voice]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
"v1/text-to-voice/create-voice-from-preview",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"voice_name": voice_name,
"voice_description": voice_description,
"generated_voice_id": generated_voice_id,
"labels": labels,
"played_not_selected_voice_ids": played_not_selected_voice_ids,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
Voice,
construct_type(
type_=Voice, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Add a generated voice to the voice library.
Parameters
----------
voice_name : str
Name to use for the created voice.
voice_description : str
Description to use for the created voice.
generated_voice_id : str
The generated_voice_id to create. Call POST /v1/text-to-voice/create-previews and fetch the generated_voice_id from the response header if you don't have one yet.
labels : typing.Optional[typing.Dict[str, typing.Optional[str]]]
Optional metadata to add to the created voice. Defaults to None.
played_not_selected_voice_ids : typing.Optional[typing.Sequence[str]]
List of voice ids that the user has played but not selected. Used for RLHF.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[Voice]
Successful Response
| create_voice_from_preview | python | elevenlabs/elevenlabs-python | src/elevenlabs/text_to_voice/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/text_to_voice/raw_client.py | MIT |
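All of these request builders pass `omit=OMIT` so that parameters left at their `OMIT` default are dropped from the JSON body while an explicit `None` is kept. A minimal sketch of that sentinel filtering:

```python
OMIT = object()  # sentinel marking "not provided", distinct from an explicit None

def build_json_body(**fields):
    # Drop OMIT-valued fields so unset parameters never reach the wire;
    # an explicit None still survives into the payload.
    return {k: v for k, v in fields.items() if v is not OMIT}

body = build_json_body(
    voice_name="Sassy squeaky mouse",
    voice_description="A sassy squeaky mouse",
    labels=OMIT,
    played_not_selected_voice_ids=OMIT,
)
```

The sentinel is what lets the API distinguish "don't send this field" from "send this field as null".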
async def create_previews(
self,
*,
voice_description: str,
output_format: typing.Optional[TextToVoiceCreatePreviewsRequestOutputFormat] = None,
text: typing.Optional[str] = OMIT,
auto_generate_text: typing.Optional[bool] = OMIT,
loudness: typing.Optional[float] = OMIT,
quality: typing.Optional[float] = OMIT,
seed: typing.Optional[int] = OMIT,
guidance_scale: typing.Optional[float] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[VoiceDesignPreviewResponse]:
"""
Create a voice from a text prompt.
Parameters
----------
voice_description : str
Description to use for the created voice.
output_format : typing.Optional[TextToVoiceCreatePreviewsRequestOutputFormat]
The output format of the generated audio.
text : typing.Optional[str]
Text to generate. The text length must be between 100 and 1000 characters.
auto_generate_text : typing.Optional[bool]
Whether to automatically generate a text suitable for the voice description.
loudness : typing.Optional[float]
Controls the volume level of the generated voice. -1 is quietest, 1 is loudest, 0 corresponds to roughly -24 LUFS.
quality : typing.Optional[float]
Higher quality results in better voice output but less variety.
seed : typing.Optional[int]
Random number that controls the voice generation. The same seed with the same inputs produces the same voice.
guidance_scale : typing.Optional[float]
Controls how closely the AI follows the prompt. Lower numbers give the AI more freedom to be creative, while higher numbers force it to stick more closely to the prompt. High numbers can cause the voice to sound artificial or robotic. We recommend using longer, more detailed prompts at a lower guidance scale.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[VoiceDesignPreviewResponse]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
"v1/text-to-voice/create-previews",
base_url=self._client_wrapper.get_environment().base,
method="POST",
params={
"output_format": output_format,
},
json={
"voice_description": voice_description,
"text": text,
"auto_generate_text": auto_generate_text,
"loudness": loudness,
"quality": quality,
"seed": seed,
"guidance_scale": guidance_scale,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
VoiceDesignPreviewResponse,
construct_type(
type_=VoiceDesignPreviewResponse, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Create a voice from a text prompt.
Parameters
----------
voice_description : str
Description to use for the created voice.
output_format : typing.Optional[TextToVoiceCreatePreviewsRequestOutputFormat]
The output format of the generated audio.
text : typing.Optional[str]
Text to generate, text length has to be between 100 and 1000.
auto_generate_text : typing.Optional[bool]
Whether to automatically generate a text suitable for the voice description.
loudness : typing.Optional[float]
Controls the volume level of the generated voice. -1 is quietest, 1 is loudest, 0 corresponds to roughly -24 LUFS.
quality : typing.Optional[float]
Higher quality results in better voice output but less variety.
seed : typing.Optional[int]
Random number that controls the voice generation. The same seed with the same inputs produces the same voice.
guidance_scale : typing.Optional[float]
Controls how closely the AI follows the prompt. Lower numbers give the AI more freedom to be creative, while higher numbers force it to stick more closely to the prompt. High numbers can cause the voice to sound artificial or robotic. We recommend using longer, more detailed prompts at a lower guidance scale.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[VoiceDesignPreviewResponse]
Successful Response
| create_previews | python | elevenlabs/elevenlabs-python | src/elevenlabs/text_to_voice/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/text_to_voice/raw_client.py | MIT |
async def create_voice_from_preview(
self,
*,
voice_name: str,
voice_description: str,
generated_voice_id: str,
labels: typing.Optional[typing.Dict[str, typing.Optional[str]]] = OMIT,
played_not_selected_voice_ids: typing.Optional[typing.Sequence[str]] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[Voice]:
"""
Add a generated voice to the voice library.
Parameters
----------
voice_name : str
Name to use for the created voice.
voice_description : str
Description to use for the created voice.
generated_voice_id : str
The generated_voice_id to create. Call POST /v1/text-to-voice/create-previews and fetch the generated_voice_id from the response header if you don't have one yet.
labels : typing.Optional[typing.Dict[str, typing.Optional[str]]]
Optional metadata to add to the created voice. Defaults to None.
played_not_selected_voice_ids : typing.Optional[typing.Sequence[str]]
List of voice ids that the user has played but not selected. Used for RLHF.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[Voice]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
"v1/text-to-voice/create-voice-from-preview",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"voice_name": voice_name,
"voice_description": voice_description,
"generated_voice_id": generated_voice_id,
"labels": labels,
"played_not_selected_voice_ids": played_not_selected_voice_ids,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
Voice,
construct_type(
type_=Voice, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Add a generated voice to the voice library.
Parameters
----------
voice_name : str
Name to use for the created voice.
voice_description : str
Description to use for the created voice.
generated_voice_id : str
The generated_voice_id to create. Call POST /v1/text-to-voice/create-previews and fetch the generated_voice_id from the response header if you don't have one yet.
labels : typing.Optional[typing.Dict[str, typing.Optional[str]]]
Optional metadata to add to the created voice. Defaults to None.
played_not_selected_voice_ids : typing.Optional[typing.Sequence[str]]
List of voice ids that the user has played but not selected. Used for RLHF.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[Voice]
Successful Response
| create_voice_from_preview | python | elevenlabs/elevenlabs-python | src/elevenlabs/text_to_voice/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/text_to_voice/raw_client.py | MIT |
def get(
self,
*,
start_unix: int,
end_unix: int,
include_workspace_metrics: typing.Optional[bool] = None,
breakdown_type: typing.Optional[BreakdownTypes] = None,
aggregation_interval: typing.Optional[UsageAggregationInterval] = None,
metric: typing.Optional[MetricType] = None,
request_options: typing.Optional[RequestOptions] = None,
) -> UsageCharactersResponseModel:
"""
Returns the usage metrics for the current user or the entire workspace they are part of. The response provides a time axis based on the specified aggregation interval (default: day), with usage values for each interval along that axis. Usage is broken down by the selected breakdown type. For example, breakdown type "voice" will return the usage of each voice for each interval along the time axis.
Parameters
----------
start_unix : int
UTC Unix timestamp for the start of the usage window, in milliseconds. To include the first day of the window, the timestamp should be at 00:00:00 of that day.
end_unix : int
UTC Unix timestamp for the end of the usage window, in milliseconds. To include the last day of the window, the timestamp should be at 23:59:59 of that day.
include_workspace_metrics : typing.Optional[bool]
Whether or not to include the statistics of the entire workspace.
breakdown_type : typing.Optional[BreakdownTypes]
How to break down the information. Cannot be "user" if include_workspace_metrics is False.
aggregation_interval : typing.Optional[UsageAggregationInterval]
How to aggregate usage data over time. Can be "hour", "day", "week", "month", or "cumulative".
metric : typing.Optional[MetricType]
Which metric to aggregate.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
UsageCharactersResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.usage.get(
start_unix=1,
end_unix=1,
)
"""
_response = self._raw_client.get(
start_unix=start_unix,
end_unix=end_unix,
include_workspace_metrics=include_workspace_metrics,
breakdown_type=breakdown_type,
aggregation_interval=aggregation_interval,
metric=metric,
request_options=request_options,
)
return _response.data |
Returns the usage metrics for the current user or the entire workspace they are part of. The response provides a time axis based on the specified aggregation interval (default: day), with usage values for each interval along that axis. Usage is broken down by the selected breakdown type. For example, breakdown type "voice" will return the usage of each voice for each interval along the time axis.
Parameters
----------
start_unix : int
UTC Unix timestamp for the start of the usage window, in milliseconds. To include the first day of the window, the timestamp should be at 00:00:00 of that day.
end_unix : int
UTC Unix timestamp for the end of the usage window, in milliseconds. To include the last day of the window, the timestamp should be at 23:59:59 of that day.
include_workspace_metrics : typing.Optional[bool]
Whether or not to include the statistics of the entire workspace.
breakdown_type : typing.Optional[BreakdownTypes]
How to break down the information. Cannot be "user" if include_workspace_metrics is False.
aggregation_interval : typing.Optional[UsageAggregationInterval]
How to aggregate usage data over time. Can be "hour", "day", "week", "month", or "cumulative".
metric : typing.Optional[MetricType]
Which metric to aggregate.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
UsageCharactersResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.usage.get(
start_unix=1,
end_unix=1,
)
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/usage/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/usage/client.py | MIT |
async def get(
self,
*,
start_unix: int,
end_unix: int,
include_workspace_metrics: typing.Optional[bool] = None,
breakdown_type: typing.Optional[BreakdownTypes] = None,
aggregation_interval: typing.Optional[UsageAggregationInterval] = None,
metric: typing.Optional[MetricType] = None,
request_options: typing.Optional[RequestOptions] = None,
) -> UsageCharactersResponseModel:
"""
Returns the usage metrics for the current user or the entire workspace they are part of. The response provides a time axis based on the specified aggregation interval (default: day), with usage values for each interval along that axis. Usage is broken down by the selected breakdown type. For example, breakdown type "voice" will return the usage of each voice for each interval along the time axis.
Parameters
----------
start_unix : int
UTC Unix timestamp for the start of the usage window, in milliseconds. To include the first day of the window, the timestamp should be at 00:00:00 of that day.
end_unix : int
UTC Unix timestamp for the end of the usage window, in milliseconds. To include the last day of the window, the timestamp should be at 23:59:59 of that day.
include_workspace_metrics : typing.Optional[bool]
Whether or not to include the statistics of the entire workspace.
breakdown_type : typing.Optional[BreakdownTypes]
How to break down the information. Cannot be "user" if include_workspace_metrics is False.
aggregation_interval : typing.Optional[UsageAggregationInterval]
How to aggregate usage data over time. Can be "hour", "day", "week", "month", or "cumulative".
metric : typing.Optional[MetricType]
Which metric to aggregate.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
UsageCharactersResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.usage.get(
start_unix=1,
end_unix=1,
)
asyncio.run(main())
"""
_response = await self._raw_client.get(
start_unix=start_unix,
end_unix=end_unix,
include_workspace_metrics=include_workspace_metrics,
breakdown_type=breakdown_type,
aggregation_interval=aggregation_interval,
metric=metric,
request_options=request_options,
)
return _response.data |
Returns the usage metrics for the current user or the entire workspace they are part of. The response provides a time axis based on the specified aggregation interval (default: day), with usage values for each interval along that axis. Usage is broken down by the selected breakdown type. For example, breakdown type "voice" will return the usage of each voice for each interval along the time axis.
Parameters
----------
start_unix : int
UTC Unix timestamp for the start of the usage window, in milliseconds. To include the first day of the window, the timestamp should be at 00:00:00 of that day.
end_unix : int
UTC Unix timestamp for the end of the usage window, in milliseconds. To include the last day of the window, the timestamp should be at 23:59:59 of that day.
include_workspace_metrics : typing.Optional[bool]
Whether or not to include the statistics of the entire workspace.
breakdown_type : typing.Optional[BreakdownTypes]
How to break down the information. Cannot be "user" if include_workspace_metrics is False.
aggregation_interval : typing.Optional[UsageAggregationInterval]
How to aggregate usage data over time. Can be "hour", "day", "week", "month", or "cumulative".
metric : typing.Optional[MetricType]
Which metric to aggregate.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
UsageCharactersResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.usage.get(
start_unix=1,
end_unix=1,
)
asyncio.run(main())
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/usage/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/usage/client.py | MIT |
def get(
self,
*,
start_unix: int,
end_unix: int,
include_workspace_metrics: typing.Optional[bool] = None,
breakdown_type: typing.Optional[BreakdownTypes] = None,
aggregation_interval: typing.Optional[UsageAggregationInterval] = None,
metric: typing.Optional[MetricType] = None,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[UsageCharactersResponseModel]:
"""
Returns the usage metrics for the current user or the entire workspace they are part of. The response provides a time axis based on the specified aggregation interval (default: day), with usage values for each interval along that axis. Usage is broken down by the selected breakdown type. For example, breakdown type "voice" will return the usage of each voice for each interval along the time axis.
Parameters
----------
start_unix : int
UTC Unix timestamp for the start of the usage window, in milliseconds. To include the first day of the window, the timestamp should be at 00:00:00 of that day.
end_unix : int
UTC Unix timestamp for the end of the usage window, in milliseconds. To include the last day of the window, the timestamp should be at 23:59:59 of that day.
include_workspace_metrics : typing.Optional[bool]
Whether or not to include the statistics of the entire workspace.
breakdown_type : typing.Optional[BreakdownTypes]
How to break down the information. Cannot be "user" if include_workspace_metrics is False.
aggregation_interval : typing.Optional[UsageAggregationInterval]
How to aggregate usage data over time. Can be "hour", "day", "week", "month", or "cumulative".
metric : typing.Optional[MetricType]
Which metric to aggregate.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[UsageCharactersResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
"v1/usage/character-stats",
base_url=self._client_wrapper.get_environment().base,
method="GET",
params={
"start_unix": start_unix,
"end_unix": end_unix,
"include_workspace_metrics": include_workspace_metrics,
"breakdown_type": breakdown_type,
"aggregation_interval": aggregation_interval,
"metric": metric,
},
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
UsageCharactersResponseModel,
construct_type(
type_=UsageCharactersResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Returns the usage metrics for the current user or the entire workspace they are part of. The response provides a time axis based on the specified aggregation interval (default: day), with usage values for each interval along that axis. Usage is broken down by the selected breakdown type. For example, breakdown type "voice" will return the usage of each voice for each interval along the time axis.
Parameters
----------
start_unix : int
UTC Unix timestamp for the start of the usage window, in milliseconds. To include the first day of the window, the timestamp should be at 00:00:00 of that day.
end_unix : int
UTC Unix timestamp for the end of the usage window, in milliseconds. To include the last day of the window, the timestamp should be at 23:59:59 of that day.
include_workspace_metrics : typing.Optional[bool]
Whether or not to include the statistics of the entire workspace.
breakdown_type : typing.Optional[BreakdownTypes]
How to break down the information. Cannot be "user" if include_workspace_metrics is False.
aggregation_interval : typing.Optional[UsageAggregationInterval]
How to aggregate usage data over time. Can be "hour", "day", "week", "month", or "cumulative".
metric : typing.Optional[MetricType]
Which metric to aggregate.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[UsageCharactersResponseModel]
Successful Response
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/usage/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/usage/raw_client.py | MIT |
async def get(
self,
*,
start_unix: int,
end_unix: int,
include_workspace_metrics: typing.Optional[bool] = None,
breakdown_type: typing.Optional[BreakdownTypes] = None,
aggregation_interval: typing.Optional[UsageAggregationInterval] = None,
metric: typing.Optional[MetricType] = None,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[UsageCharactersResponseModel]:
"""
Returns the usage metrics for the current user or the entire workspace they are part of. The response provides a time axis based on the specified aggregation interval (default: day), with usage values for each interval along that axis. Usage is broken down by the selected breakdown type. For example, breakdown type "voice" will return the usage of each voice for each interval along the time axis.
Parameters
----------
start_unix : int
UTC Unix timestamp for the start of the usage window, in milliseconds. To include the first day of the window, the timestamp should be at 00:00:00 of that day.
end_unix : int
UTC Unix timestamp for the end of the usage window, in milliseconds. To include the last day of the window, the timestamp should be at 23:59:59 of that day.
include_workspace_metrics : typing.Optional[bool]
Whether or not to include the statistics of the entire workspace.
breakdown_type : typing.Optional[BreakdownTypes]
How to break down the information. Cannot be "user" if include_workspace_metrics is False.
aggregation_interval : typing.Optional[UsageAggregationInterval]
How to aggregate usage data over time. Can be "hour", "day", "week", "month", or "cumulative".
metric : typing.Optional[MetricType]
Which metric to aggregate.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[UsageCharactersResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
"v1/usage/character-stats",
base_url=self._client_wrapper.get_environment().base,
method="GET",
params={
"start_unix": start_unix,
"end_unix": end_unix,
"include_workspace_metrics": include_workspace_metrics,
"breakdown_type": breakdown_type,
"aggregation_interval": aggregation_interval,
"metric": metric,
},
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
UsageCharactersResponseModel,
construct_type(
type_=UsageCharactersResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Returns the usage metrics for the current user or the entire workspace they are part of. The response provides a time axis based on the specified aggregation interval (default: day), with usage values for each interval along that axis. Usage is broken down by the selected breakdown type. For example, breakdown type "voice" will return the usage of each voice for each interval along the time axis.
Parameters
----------
start_unix : int
UTC Unix timestamp for the start of the usage window, in milliseconds. To include the first day of the window, the timestamp should be at 00:00:00 of that day.
end_unix : int
UTC Unix timestamp for the end of the usage window, in milliseconds. To include the last day of the window, the timestamp should be at 23:59:59 of that day.
include_workspace_metrics : typing.Optional[bool]
Whether or not to include the statistics of the entire workspace.
breakdown_type : typing.Optional[BreakdownTypes]
How to break down the information. Cannot be "user" if include_workspace_metrics is False.
aggregation_interval : typing.Optional[UsageAggregationInterval]
How to aggregate usage data over time. Can be "hour", "day", "week", "month", or "cumulative".
metric : typing.Optional[MetricType]
Which metric to aggregate.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[UsageCharactersResponseModel]
Successful Response
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/usage/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/usage/raw_client.py | MIT |
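The raw clients above all share one error-dispatch shape: a 2xx status returns the typed data, 422 raises `UnprocessableEntityError`, and any other status (or a non-JSON body) ends in a generic `ApiError`. A condensed, self-contained sketch of that shape, using simplified stand-ins for the SDK's exception classes:

```python
class ApiError(Exception):
    """Simplified stand-in for the SDK's generic API error."""

    def __init__(self, status_code: int, body: object) -> None:
        super().__init__(f"status_code: {status_code}, body: {body}")
        self.status_code = status_code
        self.body = body

class UnprocessableEntityError(ApiError):
    """Stand-in for the dedicated HTTP 422 validation error."""

def dispatch(status_code: int, body: object) -> object:
    # Mirrors the raw-client pattern: the success range passes data through,
    # 422 maps to a dedicated error type, everything else is a generic ApiError.
    if 200 <= status_code < 300:
        return body
    if status_code == 422:
        raise UnprocessableEntityError(422, body)
    raise ApiError(status_code, body)
```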
async def get(self, *, request_options: typing.Optional[RequestOptions] = None) -> User:
"""
Gets information about the user
Parameters
----------
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
User
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.user.get()
asyncio.run(main())
"""
_response = await self._raw_client.get(request_options=request_options)
return _response.data |
Gets information about the user
Parameters
----------
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
User
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.user.get()
asyncio.run(main())
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/user/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/user/client.py | MIT |
def get(self, *, request_options: typing.Optional[RequestOptions] = None) -> HttpResponse[User]:
"""
Gets information about the user
Parameters
----------
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[User]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
"v1/user",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
User,
construct_type(
type_=User, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Gets information about the user
Parameters
----------
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[User]
Successful Response
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/user/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/user/raw_client.py | MIT |
async def get(self, *, request_options: typing.Optional[RequestOptions] = None) -> AsyncHttpResponse[User]:
"""
Gets information about the user
Parameters
----------
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[User]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
"v1/user",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
User,
construct_type(
type_=User, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Gets information about the user
Parameters
----------
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[User]
Successful Response
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/user/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/user/raw_client.py | MIT |
async def get(self, *, request_options: typing.Optional[RequestOptions] = None) -> Subscription:
"""
Gets extended information about the user's subscription
Parameters
----------
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
Subscription
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.user.subscription.get()
asyncio.run(main())
"""
_response = await self._raw_client.get(request_options=request_options)
return _response.data |
Gets extended information about the user's subscription
Parameters
----------
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
Subscription
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.user.subscription.get()
asyncio.run(main())
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/user/subscription/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/user/subscription/client.py | MIT |
def get(self, *, request_options: typing.Optional[RequestOptions] = None) -> HttpResponse[Subscription]:
"""
Gets extended information about the user's subscription
Parameters
----------
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[Subscription]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
"v1/user/subscription",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
Subscription,
construct_type(
type_=Subscription, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Gets extended information about the user's subscription
Parameters
----------
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[Subscription]
Successful Response
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/user/subscription/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/user/subscription/raw_client.py | MIT |
async def get(self, *, request_options: typing.Optional[RequestOptions] = None) -> AsyncHttpResponse[Subscription]:
"""
Gets extended information about the user's subscription
Parameters
----------
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[Subscription]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
"v1/user/subscription",
base_url=self._client_wrapper.get_environment().base,
method="GET",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
Subscription,
construct_type(
type_=Subscription, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Gets extended information about the user's subscription
Parameters
----------
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[Subscription]
Successful Response
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/user/subscription/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/user/subscription/raw_client.py | MIT |
def get_all(
self, *, show_legacy: typing.Optional[bool] = None, request_options: typing.Optional[RequestOptions] = None
) -> GetVoicesResponse:
"""
Returns a list of all available voices for a user.
Parameters
----------
show_legacy : typing.Optional[bool]
If set to true, legacy premade voices will be included in responses from /v1/voices
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetVoicesResponse
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.voices.get_all()
"""
_response = self._raw_client.get_all(show_legacy=show_legacy, request_options=request_options)
return _response.data |
Returns a list of all available voices for a user.
Parameters
----------
show_legacy : typing.Optional[bool]
If set to true, legacy premade voices will be included in responses from /v1/voices
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetVoicesResponse
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.voices.get_all()
| get_all | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/client.py | MIT |
def search(
self,
*,
next_page_token: typing.Optional[str] = None,
page_size: typing.Optional[int] = None,
search: typing.Optional[str] = None,
sort: typing.Optional[str] = None,
sort_direction: typing.Optional[str] = None,
voice_type: typing.Optional[str] = None,
category: typing.Optional[str] = None,
fine_tuning_state: typing.Optional[str] = None,
collection_id: typing.Optional[str] = None,
include_total_count: typing.Optional[bool] = None,
request_options: typing.Optional[RequestOptions] = None,
) -> GetVoicesV2Response:
"""
Gets a list of all available voices for a user with search, filtering and pagination.
Parameters
----------
next_page_token : typing.Optional[str]
The next page token to use for pagination. Returned from the previous request.
page_size : typing.Optional[int]
How many voices to return at maximum. Cannot exceed 100; defaults to 10. Page 0 may include more voices due to default voices being included.
search : typing.Optional[str]
Search term to filter voices by. Searches in name, description, labels, category.
sort : typing.Optional[str]
Which field to sort by, one of 'created_at_unix' or 'name'. 'created_at_unix' may not be available for older voices.
sort_direction : typing.Optional[str]
Which direction to sort the voices in. 'asc' or 'desc'.
voice_type : typing.Optional[str]
Type of the voice to filter by. One of 'personal', 'community', 'default', 'workspace', 'non-default'. 'non-default' is equal to 'personal' plus 'community'.
category : typing.Optional[str]
Category of the voice to filter by. One of 'premade', 'cloned', 'generated', 'professional'
fine_tuning_state : typing.Optional[str]
State of the voice's fine tuning to filter by. Applicable only to professional voice clones. One of 'draft', 'not_verified', 'not_started', 'queued', 'fine_tuning', 'fine_tuned', 'failed', 'delayed'
collection_id : typing.Optional[str]
Collection ID to filter voices by.
include_total_count : typing.Optional[bool]
Whether to include the total count of voices found in the response. Incurs a performance cost.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetVoicesV2Response
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.voices.search(
include_total_count=True,
)
"""
_response = self._raw_client.search(
next_page_token=next_page_token,
page_size=page_size,
search=search,
sort=sort,
sort_direction=sort_direction,
voice_type=voice_type,
category=category,
fine_tuning_state=fine_tuning_state,
collection_id=collection_id,
include_total_count=include_total_count,
request_options=request_options,
)
return _response.data |
Gets a list of all available voices for a user with search, filtering and pagination.
Parameters
----------
next_page_token : typing.Optional[str]
The next page token to use for pagination. Returned from the previous request.
page_size : typing.Optional[int]
How many voices to return at maximum. Cannot exceed 100; defaults to 10. Page 0 may include more voices due to default voices being included.
search : typing.Optional[str]
Search term to filter voices by. Searches in name, description, labels, category.
sort : typing.Optional[str]
Which field to sort by, one of 'created_at_unix' or 'name'. 'created_at_unix' may not be available for older voices.
sort_direction : typing.Optional[str]
Which direction to sort the voices in. 'asc' or 'desc'.
voice_type : typing.Optional[str]
Type of the voice to filter by. One of 'personal', 'community', 'default', 'workspace', 'non-default'. 'non-default' is equal to 'personal' plus 'community'.
category : typing.Optional[str]
Category of the voice to filter by. One of 'premade', 'cloned', 'generated', 'professional'
fine_tuning_state : typing.Optional[str]
State of the voice's fine tuning to filter by. Applicable only to professional voice clones. One of 'draft', 'not_verified', 'not_started', 'queued', 'fine_tuning', 'fine_tuned', 'failed', 'delayed'
collection_id : typing.Optional[str]
Collection ID to filter voices by.
include_total_count : typing.Optional[bool]
Whether to include the total count of voices found in the response. Incurs a performance cost.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetVoicesV2Response
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.voices.search(
include_total_count=True,
)
| search | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/client.py | MIT |
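`voices.search` is paginated through `next_page_token`: each response carries the token for the following page, and a caller loops until no token is returned. A minimal sketch of that pattern, with a stand-in `fetch_page` in place of the real `client.voices.search(next_page_token=..., page_size=...)` call (the voice names and token values are invented for illustration):

```python
from typing import Optional

def fetch_page(next_page_token: Optional[str], page_size: int) -> dict:
    """Stand-in for client.voices.search(...). Returns a dict shaped like the
    paginated response: a list of voices plus a next_page_token (None at the end)."""
    pages = {
        None: (["Rachel", "Adam"], "tok1"),
        "tok1": (["Bella"], None),
    }
    voices, token = pages[next_page_token]
    return {"voices": voices, "next_page_token": token}

def list_all_voices(page_size: int = 10) -> list:
    voices, token = [], None
    while True:
        page = fetch_page(token, page_size)
        voices.extend(page["voices"])
        token = page.get("next_page_token")
        if not token:  # no further pages
            return voices

all_voices = list_all_voices()
```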
def get(
self,
voice_id: str,
*,
with_settings: typing.Optional[bool] = None,
request_options: typing.Optional[RequestOptions] = None,
) -> Voice:
"""
Returns metadata about a specific voice.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
with_settings : typing.Optional[bool]
This parameter is now deprecated. It is ignored and will be removed in a future version.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
Voice
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.voices.get(
voice_id="21m00Tcm4TlvDq8ikWAM",
)
"""
_response = self._raw_client.get(voice_id, with_settings=with_settings, request_options=request_options)
return _response.data |
Returns metadata about a specific voice.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
with_settings : typing.Optional[bool]
This parameter is now deprecated. It is ignored and will be removed in a future version.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
Voice
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.voices.get(
voice_id="21m00Tcm4TlvDq8ikWAM",
)
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/client.py | MIT |
def delete(
self, voice_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> DeleteVoiceResponseModel:
"""
Deletes a voice by its ID.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
DeleteVoiceResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.voices.delete(
voice_id="21m00Tcm4TlvDq8ikWAM",
)
"""
_response = self._raw_client.delete(voice_id, request_options=request_options)
return _response.data |
Deletes a voice by its ID.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
DeleteVoiceResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.voices.delete(
voice_id="21m00Tcm4TlvDq8ikWAM",
)
| delete | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/client.py | MIT |
def update(
self,
voice_id: str,
*,
name: str,
files: typing.Optional[typing.List[core.File]] = OMIT,
remove_background_noise: typing.Optional[bool] = OMIT,
description: typing.Optional[str] = OMIT,
labels: typing.Optional[str] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> EditVoiceResponseModel:
"""
Edit a voice created by you.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
name : str
The name that identifies this voice. This will be displayed in the dropdown of the website.
files : typing.Optional[typing.List[core.File]]
See core.File for more documentation
remove_background_noise : typing.Optional[bool]
If set, this will remove background noise from voice samples using our audio isolation model. If the samples do not include background noise, it can make the quality worse.
description : typing.Optional[str]
A description of the voice.
labels : typing.Optional[str]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
EditVoiceResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.voices.update(
voice_id="21m00Tcm4TlvDq8ikWAM",
name="name",
)
"""
_response = self._raw_client.update(
voice_id,
name=name,
files=files,
remove_background_noise=remove_background_noise,
description=description,
labels=labels,
request_options=request_options,
)
return _response.data |
Edit a voice created by you.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
name : str
The name that identifies this voice. This will be displayed in the dropdown of the website.
files : typing.Optional[typing.List[core.File]]
See core.File for more documentation
remove_background_noise : typing.Optional[bool]
If set, this will remove background noise from voice samples using our audio isolation model. If the samples do not include background noise, it can make the quality worse.
description : typing.Optional[str]
A description of the voice.
labels : typing.Optional[str]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
EditVoiceResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.voices.update(
voice_id="21m00Tcm4TlvDq8ikWAM",
name="name",
)
| update | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/client.py | MIT |
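The `labels` parameter of `update` expects a "serialized labels dictionary" as a string. A small sketch of preparing that string; JSON is an assumption here, since the docstring only says "serialized":

```python
import json

def serialize_labels(labels: dict) -> str:
    # Compact, deterministic JSON encoding; the exact wire format is an
    # assumption, not stated in the docstring above.
    return json.dumps(labels, separators=(",", ":"), sort_keys=True)
```

The result would then be passed as `client.voices.update(voice_id=..., name=..., labels=serialize_labels({...}))`.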
def get_shared(
self,
*,
page_size: typing.Optional[int] = None,
category: typing.Optional[VoicesGetSharedRequestCategory] = None,
gender: typing.Optional[str] = None,
age: typing.Optional[str] = None,
accent: typing.Optional[str] = None,
language: typing.Optional[str] = None,
locale: typing.Optional[str] = None,
search: typing.Optional[str] = None,
use_cases: typing.Optional[typing.Union[str, typing.Sequence[str]]] = None,
descriptives: typing.Optional[typing.Union[str, typing.Sequence[str]]] = None,
featured: typing.Optional[bool] = None,
min_notice_period_days: typing.Optional[int] = None,
include_custom_rates: typing.Optional[bool] = None,
include_live_moderated: typing.Optional[bool] = None,
reader_app_enabled: typing.Optional[bool] = None,
owner_id: typing.Optional[str] = None,
sort: typing.Optional[str] = None,
page: typing.Optional[int] = None,
request_options: typing.Optional[RequestOptions] = None,
) -> GetLibraryVoicesResponse:
"""
Retrieves a list of shared voices.
Parameters
----------
page_size : typing.Optional[int]
How many shared voices to return at maximum. Cannot exceed 100; defaults to 30.
category : typing.Optional[VoicesGetSharedRequestCategory]
Voice category used for filtering
gender : typing.Optional[str]
Gender used for filtering
age : typing.Optional[str]
Age used for filtering
accent : typing.Optional[str]
Accent used for filtering
language : typing.Optional[str]
Language used for filtering
locale : typing.Optional[str]
Locale used for filtering
search : typing.Optional[str]
Search term used for filtering
use_cases : typing.Optional[typing.Union[str, typing.Sequence[str]]]
Use-case used for filtering
descriptives : typing.Optional[typing.Union[str, typing.Sequence[str]]]
Search term used for filtering
featured : typing.Optional[bool]
Filter featured voices
min_notice_period_days : typing.Optional[int]
Filter voices with a minimum notice period of the given number of days.
include_custom_rates : typing.Optional[bool]
Include/exclude voices with custom rates
include_live_moderated : typing.Optional[bool]
Include/exclude voices that are live moderated
reader_app_enabled : typing.Optional[bool]
Filter voices that are enabled for the reader app
owner_id : typing.Optional[str]
Filter voices by public owner ID
sort : typing.Optional[str]
Sort criteria
page : typing.Optional[int]
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetLibraryVoicesResponse
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.voices.get_shared(
featured=True,
reader_app_enabled=True,
)
"""
_response = self._raw_client.get_shared(
page_size=page_size,
category=category,
gender=gender,
age=age,
accent=accent,
language=language,
locale=locale,
search=search,
use_cases=use_cases,
descriptives=descriptives,
featured=featured,
min_notice_period_days=min_notice_period_days,
include_custom_rates=include_custom_rates,
include_live_moderated=include_live_moderated,
reader_app_enabled=reader_app_enabled,
owner_id=owner_id,
sort=sort,
page=page,
request_options=request_options,
)
return _response.data |
Retrieves a list of shared voices.
Parameters
----------
page_size : typing.Optional[int]
How many shared voices to return at maximum. Cannot exceed 100; defaults to 30.
category : typing.Optional[VoicesGetSharedRequestCategory]
Voice category used for filtering
gender : typing.Optional[str]
Gender used for filtering
age : typing.Optional[str]
Age used for filtering
accent : typing.Optional[str]
Accent used for filtering
language : typing.Optional[str]
Language used for filtering
locale : typing.Optional[str]
Locale used for filtering
search : typing.Optional[str]
Search term used for filtering
use_cases : typing.Optional[typing.Union[str, typing.Sequence[str]]]
Use-case used for filtering
descriptives : typing.Optional[typing.Union[str, typing.Sequence[str]]]
Search term used for filtering
featured : typing.Optional[bool]
Filter featured voices
min_notice_period_days : typing.Optional[int]
Filter voices with a minimum notice period of the given number of days.
include_custom_rates : typing.Optional[bool]
Include/exclude voices with custom rates
include_live_moderated : typing.Optional[bool]
Include/exclude voices that are live moderated
reader_app_enabled : typing.Optional[bool]
Filter voices that are enabled for the reader app
owner_id : typing.Optional[str]
Filter voices by public owner ID
sort : typing.Optional[str]
Sort criteria
page : typing.Optional[int]
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetLibraryVoicesResponse
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.voices.get_shared(
featured=True,
reader_app_enabled=True,
)
| get_shared | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/client.py | MIT |
def find_similar_voices(
self,
*,
audio_file: typing.Optional[core.File] = OMIT,
similarity_threshold: typing.Optional[float] = OMIT,
top_k: typing.Optional[int] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> GetLibraryVoicesResponse:
"""
Returns a list of shared voices similar to the provided audio sample. If neither similarity_threshold nor top_k is provided, we will apply default values.
Parameters
----------
audio_file : typing.Optional[core.File]
See core.File for more documentation
similarity_threshold : typing.Optional[float]
Threshold for voice similarity between the provided sample and library voices. Values range from 0 to 2. The smaller the value, the more similar the returned voices will be.
top_k : typing.Optional[int]
Number of most similar voices to return. If similarity_threshold is provided, fewer than this number of voices may be returned. Values range from 1 to 100.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetLibraryVoicesResponse
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.voices.find_similar_voices()
"""
_response = self._raw_client.find_similar_voices(
audio_file=audio_file,
similarity_threshold=similarity_threshold,
top_k=top_k,
request_options=request_options,
)
return _response.data |
Returns a list of shared voices similar to the provided audio sample. If neither similarity_threshold nor top_k is provided, we will apply default values.
Parameters
----------
audio_file : typing.Optional[core.File]
See core.File for more documentation
similarity_threshold : typing.Optional[float]
Threshold for voice similarity between the provided sample and library voices. Values range from 0 to 2. The smaller the value, the more similar the returned voices will be.
top_k : typing.Optional[int]
Number of most similar voices to return. If similarity_threshold is provided, fewer than this number of voices may be returned. Values range from 1 to 100.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetLibraryVoicesResponse
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.voices.find_similar_voices()
| find_similar_voices | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/client.py | MIT |
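`find_similar_voices` documents hard ranges for its two tuning knobs (0 to 2 for `similarity_threshold`, 1 to 100 for `top_k`). A client-side pre-check sketch that fails fast before a request is sent; the helper name is illustrative, not part of the SDK:

```python
def check_similarity_params(similarity_threshold=None, top_k=None):
    """Validate against the documented ranges; None means 'use server default'."""
    if similarity_threshold is not None and not 0 <= similarity_threshold <= 2:
        raise ValueError("similarity_threshold must be between 0 and 2")
    if top_k is not None and not 1 <= top_k <= 100:
        raise ValueError("top_k must be between 1 and 100")
```

Leaving both arguments as `None` mirrors the documented behavior of applying server-side defaults.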
async def get_all(
self, *, show_legacy: typing.Optional[bool] = None, request_options: typing.Optional[RequestOptions] = None
) -> GetVoicesResponse:
"""
Returns a list of all available voices for a user.
Parameters
----------
show_legacy : typing.Optional[bool]
If set to true, legacy premade voices will be included in responses from /v1/voices
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetVoicesResponse
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.voices.get_all()
asyncio.run(main())
"""
_response = await self._raw_client.get_all(show_legacy=show_legacy, request_options=request_options)
return _response.data |
Returns a list of all available voices for a user.
Parameters
----------
show_legacy : typing.Optional[bool]
If set to true, legacy premade voices will be included in responses from /v1/voices
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetVoicesResponse
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.voices.get_all()
asyncio.run(main())
| get_all | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/client.py | MIT |
async def search(
self,
*,
next_page_token: typing.Optional[str] = None,
page_size: typing.Optional[int] = None,
search: typing.Optional[str] = None,
sort: typing.Optional[str] = None,
sort_direction: typing.Optional[str] = None,
voice_type: typing.Optional[str] = None,
category: typing.Optional[str] = None,
fine_tuning_state: typing.Optional[str] = None,
collection_id: typing.Optional[str] = None,
include_total_count: typing.Optional[bool] = None,
request_options: typing.Optional[RequestOptions] = None,
) -> GetVoicesV2Response:
"""
Gets a list of all available voices for a user with search, filtering and pagination.
Parameters
----------
next_page_token : typing.Optional[str]
The next page token to use for pagination. Returned from the previous request.
page_size : typing.Optional[int]
How many voices to return at maximum. Cannot exceed 100; defaults to 10. Page 0 may include more voices due to default voices being included.
search : typing.Optional[str]
Search term to filter voices by. Searches in name, description, labels, category.
sort : typing.Optional[str]
Which field to sort by, one of 'created_at_unix' or 'name'. 'created_at_unix' may not be available for older voices.
sort_direction : typing.Optional[str]
Which direction to sort the voices in. 'asc' or 'desc'.
voice_type : typing.Optional[str]
Type of the voice to filter by. One of 'personal', 'community', 'default', 'workspace', 'non-default'. 'non-default' is equal to 'personal' plus 'community'.
category : typing.Optional[str]
Category of the voice to filter by. One of 'premade', 'cloned', 'generated', 'professional'
fine_tuning_state : typing.Optional[str]
State of the voice's fine tuning to filter by. Applicable only to professional voice clones. One of 'draft', 'not_verified', 'not_started', 'queued', 'fine_tuning', 'fine_tuned', 'failed', 'delayed'
collection_id : typing.Optional[str]
Collection ID to filter voices by.
include_total_count : typing.Optional[bool]
Whether to include the total count of voices found in the response. Incurs a performance cost.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetVoicesV2Response
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.voices.search(
include_total_count=True,
)
asyncio.run(main())
"""
_response = await self._raw_client.search(
next_page_token=next_page_token,
page_size=page_size,
search=search,
sort=sort,
sort_direction=sort_direction,
voice_type=voice_type,
category=category,
fine_tuning_state=fine_tuning_state,
collection_id=collection_id,
include_total_count=include_total_count,
request_options=request_options,
)
return _response.data |
Gets a list of all available voices for a user with search, filtering and pagination.
Parameters
----------
next_page_token : typing.Optional[str]
The next page token to use for pagination. Returned from the previous request.
page_size : typing.Optional[int]
How many voices to return at maximum. Cannot exceed 100; defaults to 10. Page 0 may include more voices due to default voices being included.
search : typing.Optional[str]
Search term to filter voices by. Searches in name, description, labels, category.
sort : typing.Optional[str]
Which field to sort by, one of 'created_at_unix' or 'name'. 'created_at_unix' may not be available for older voices.
sort_direction : typing.Optional[str]
Which direction to sort the voices in. 'asc' or 'desc'.
voice_type : typing.Optional[str]
Type of the voice to filter by. One of 'personal', 'community', 'default', 'workspace', 'non-default'. 'non-default' is equal to 'personal' plus 'community'.
category : typing.Optional[str]
Category of the voice to filter by. One of 'premade', 'cloned', 'generated', 'professional'
fine_tuning_state : typing.Optional[str]
State of the voice's fine tuning to filter by. Applicable only to professional voice clones. One of 'draft', 'not_verified', 'not_started', 'queued', 'fine_tuning', 'fine_tuned', 'failed', 'delayed'
collection_id : typing.Optional[str]
Collection ID to filter voices by.
include_total_count : typing.Optional[bool]
Whether to include the total count of voices found in the response. Incurs a performance cost.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetVoicesV2Response
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.voices.search(
include_total_count=True,
)
asyncio.run(main())
| search | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/client.py | MIT |
async def get(
self,
voice_id: str,
*,
with_settings: typing.Optional[bool] = None,
request_options: typing.Optional[RequestOptions] = None,
) -> Voice:
"""
Returns metadata about a specific voice.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
with_settings : typing.Optional[bool]
This parameter is now deprecated. It is ignored and will be removed in a future version.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
Voice
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.voices.get(
voice_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
"""
_response = await self._raw_client.get(voice_id, with_settings=with_settings, request_options=request_options)
return _response.data |
Returns metadata about a specific voice.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
with_settings : typing.Optional[bool]
This parameter is now deprecated. It is ignored and will be removed in a future version.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
Voice
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.voices.get(
voice_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/client.py | MIT |
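With the async client, several voice lookups can run concurrently. A sketch using `asyncio.gather`; the call shape matches `client.voices.get(voice_id=...)` above, and any client object exposing that method works:

```python
import asyncio

async def fetch_voices(client, voice_ids):
    """Issue all get() calls concurrently and return results in request order."""
    return await asyncio.gather(*(client.voices.get(voice_id=v) for v in voice_ids))
```

`asyncio.gather` preserves input order, so results line up with `voice_ids` even if responses arrive out of order.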
async def delete(
self, voice_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> DeleteVoiceResponseModel:
"""
Deletes a voice by its ID.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
DeleteVoiceResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.voices.delete(
voice_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
"""
_response = await self._raw_client.delete(voice_id, request_options=request_options)
return _response.data |
Deletes a voice by its ID.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
DeleteVoiceResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.voices.delete(
voice_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
| delete | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/client.py | MIT |
async def update(
self,
voice_id: str,
*,
name: str,
files: typing.Optional[typing.List[core.File]] = OMIT,
remove_background_noise: typing.Optional[bool] = OMIT,
description: typing.Optional[str] = OMIT,
labels: typing.Optional[str] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> EditVoiceResponseModel:
"""
Edit a voice created by you.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
name : str
The name that identifies this voice. This will be displayed in the dropdown of the website.
files : typing.Optional[typing.List[core.File]]
See core.File for more documentation
remove_background_noise : typing.Optional[bool]
If set, this will remove background noise from voice samples using our audio isolation model. If the samples do not include background noise, it can make the quality worse.
description : typing.Optional[str]
A description of the voice.
labels : typing.Optional[str]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
EditVoiceResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.voices.update(
voice_id="21m00Tcm4TlvDq8ikWAM",
name="name",
)
asyncio.run(main())
"""
_response = await self._raw_client.update(
voice_id,
name=name,
files=files,
remove_background_noise=remove_background_noise,
description=description,
labels=labels,
request_options=request_options,
)
return _response.data |
Edit a voice created by you.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
name : str
The name that identifies this voice. This will be displayed in the dropdown of the website.
files : typing.Optional[typing.List[core.File]]
See core.File for more documentation
remove_background_noise : typing.Optional[bool]
If set, this will remove background noise from voice samples using our audio isolation model. If the samples do not include background noise, it can make the quality worse.
description : typing.Optional[str]
A description of the voice.
labels : typing.Optional[str]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
EditVoiceResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.voices.update(
voice_id="21m00Tcm4TlvDq8ikWAM",
name="name",
)
asyncio.run(main())
| update | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/client.py | MIT |
async def get_shared(
self,
*,
page_size: typing.Optional[int] = None,
category: typing.Optional[VoicesGetSharedRequestCategory] = None,
gender: typing.Optional[str] = None,
age: typing.Optional[str] = None,
accent: typing.Optional[str] = None,
language: typing.Optional[str] = None,
locale: typing.Optional[str] = None,
search: typing.Optional[str] = None,
use_cases: typing.Optional[typing.Union[str, typing.Sequence[str]]] = None,
descriptives: typing.Optional[typing.Union[str, typing.Sequence[str]]] = None,
featured: typing.Optional[bool] = None,
min_notice_period_days: typing.Optional[int] = None,
include_custom_rates: typing.Optional[bool] = None,
include_live_moderated: typing.Optional[bool] = None,
reader_app_enabled: typing.Optional[bool] = None,
owner_id: typing.Optional[str] = None,
sort: typing.Optional[str] = None,
page: typing.Optional[int] = None,
request_options: typing.Optional[RequestOptions] = None,
) -> GetLibraryVoicesResponse:
"""
Retrieves a list of shared voices.
Parameters
----------
page_size : typing.Optional[int]
How many shared voices to return at maximum. Cannot exceed 100; defaults to 30.
category : typing.Optional[VoicesGetSharedRequestCategory]
Voice category used for filtering
gender : typing.Optional[str]
Gender used for filtering
age : typing.Optional[str]
Age used for filtering
accent : typing.Optional[str]
Accent used for filtering
language : typing.Optional[str]
Language used for filtering
locale : typing.Optional[str]
Locale used for filtering
search : typing.Optional[str]
Search term used for filtering
use_cases : typing.Optional[typing.Union[str, typing.Sequence[str]]]
Use-case used for filtering
descriptives : typing.Optional[typing.Union[str, typing.Sequence[str]]]
Search term used for filtering
featured : typing.Optional[bool]
Filter featured voices
min_notice_period_days : typing.Optional[int]
Filter voices with a minimum notice period of the given number of days.
include_custom_rates : typing.Optional[bool]
Include/exclude voices with custom rates
include_live_moderated : typing.Optional[bool]
Include/exclude voices that are live moderated
reader_app_enabled : typing.Optional[bool]
Filter voices that are enabled for the reader app
owner_id : typing.Optional[str]
Filter voices by public owner ID
sort : typing.Optional[str]
Sort criteria
page : typing.Optional[int]
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetLibraryVoicesResponse
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.voices.get_shared(
featured=True,
reader_app_enabled=True,
)
asyncio.run(main())
"""
_response = await self._raw_client.get_shared(
page_size=page_size,
category=category,
gender=gender,
age=age,
accent=accent,
language=language,
locale=locale,
search=search,
use_cases=use_cases,
descriptives=descriptives,
featured=featured,
min_notice_period_days=min_notice_period_days,
include_custom_rates=include_custom_rates,
include_live_moderated=include_live_moderated,
reader_app_enabled=reader_app_enabled,
owner_id=owner_id,
sort=sort,
page=page,
request_options=request_options,
)
return _response.data |
Retrieves a list of shared voices.
Parameters
----------
page_size : typing.Optional[int]
How many shared voices to return at maximum. Can not exceed 100, defaults to 30.
category : typing.Optional[VoicesGetSharedRequestCategory]
Voice category used for filtering
gender : typing.Optional[str]
Gender used for filtering
age : typing.Optional[str]
Age used for filtering
accent : typing.Optional[str]
Accent used for filtering
language : typing.Optional[str]
Language used for filtering
locale : typing.Optional[str]
Locale used for filtering
search : typing.Optional[str]
Search term used for filtering
use_cases : typing.Optional[typing.Union[str, typing.Sequence[str]]]
Use-case used for filtering
descriptives : typing.Optional[typing.Union[str, typing.Sequence[str]]]
Search term used for filtering
featured : typing.Optional[bool]
Filter featured voices
min_notice_period_days : typing.Optional[int]
Filter voices with a minimum notice period of the given number of days.
include_custom_rates : typing.Optional[bool]
Include/exclude voices with custom rates
include_live_moderated : typing.Optional[bool]
Include/exclude voices that are live moderated
reader_app_enabled : typing.Optional[bool]
Filter voices that are enabled for the reader app
owner_id : typing.Optional[str]
Filter voices by public owner ID
sort : typing.Optional[str]
Sort criteria
page : typing.Optional[int]
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetLibraryVoicesResponse
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.voices.get_shared(
featured=True,
reader_app_enabled=True,
)
asyncio.run(main())
| get_shared | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/client.py | MIT |
async def find_similar_voices(
self,
*,
audio_file: typing.Optional[core.File] = OMIT,
similarity_threshold: typing.Optional[float] = OMIT,
top_k: typing.Optional[int] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> GetLibraryVoicesResponse:
"""
Returns a list of shared voices similar to the provided audio sample. If neither similarity_threshold nor top_k is provided, we will apply default values.
Parameters
----------
audio_file : typing.Optional[core.File]
See core.File for more documentation
similarity_threshold : typing.Optional[float]
Threshold for voice similarity between the provided sample and library voices. Values range from 0 to 2. The smaller the value, the more similar the returned voices will be.
top_k : typing.Optional[int]
Number of most similar voices to return. If similarity_threshold is provided, fewer than this number of voices may be returned. Values range from 1 to 100.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetLibraryVoicesResponse
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.voices.find_similar_voices()
asyncio.run(main())
"""
_response = await self._raw_client.find_similar_voices(
audio_file=audio_file,
similarity_threshold=similarity_threshold,
top_k=top_k,
request_options=request_options,
)
return _response.data |
Returns a list of shared voices similar to the provided audio sample. If neither similarity_threshold nor top_k is provided, we will apply default values.
Parameters
----------
audio_file : typing.Optional[core.File]
See core.File for more documentation
similarity_threshold : typing.Optional[float]
Threshold for voice similarity between provided sample and library voices. Values range from 0 to 2. The smaller the value the more similar voices will be returned.
top_k : typing.Optional[int]
Number of most similar voices to return. If similarity_threshold is provided, less than this number of voices may be returned. Values range from 1 to 100.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
GetLibraryVoicesResponse
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.voices.find_similar_voices()
asyncio.run(main())
| find_similar_voices | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/client.py | MIT |
def get_all(
self, *, show_legacy: typing.Optional[bool] = None, request_options: typing.Optional[RequestOptions] = None
) -> HttpResponse[GetVoicesResponse]:
"""
Returns a list of all available voices for a user.
Parameters
----------
show_legacy : typing.Optional[bool]
If set to true, legacy premade voices will be included in responses from /v1/voices
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[GetVoicesResponse]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
"v1/voices",
base_url=self._client_wrapper.get_environment().base,
method="GET",
params={
"show_legacy": show_legacy,
},
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
GetVoicesResponse,
construct_type(
type_=GetVoicesResponse, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Returns a list of all available voices for a user.
Parameters
----------
show_legacy : typing.Optional[bool]
If set to true, legacy premade voices will be included in responses from /v1/voices
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[GetVoicesResponse]
Successful Response
| get_all | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/raw_client.py | MIT |
def search(
self,
*,
next_page_token: typing.Optional[str] = None,
page_size: typing.Optional[int] = None,
search: typing.Optional[str] = None,
sort: typing.Optional[str] = None,
sort_direction: typing.Optional[str] = None,
voice_type: typing.Optional[str] = None,
category: typing.Optional[str] = None,
fine_tuning_state: typing.Optional[str] = None,
collection_id: typing.Optional[str] = None,
include_total_count: typing.Optional[bool] = None,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[GetVoicesV2Response]:
"""
Gets a list of all available voices for a user with search, filtering and pagination.
Parameters
----------
next_page_token : typing.Optional[str]
The next page token to use for pagination. Returned from the previous request.
page_size : typing.Optional[int]
How many voices to return at maximum. Cannot exceed 100; defaults to 10. Page 0 may include more voices due to default voices being included.
search : typing.Optional[str]
Search term to filter voices by. Searches in name, description, labels, category.
sort : typing.Optional[str]
Which field to sort by, one of 'created_at_unix' or 'name'. 'created_at_unix' may not be available for older voices.
sort_direction : typing.Optional[str]
Which direction to sort the voices in. 'asc' or 'desc'.
voice_type : typing.Optional[str]
Type of the voice to filter by. One of 'personal', 'community', 'default', 'workspace', 'non-default'. 'non-default' is equal to 'personal' plus 'community'.
category : typing.Optional[str]
Category of the voice to filter by. One of 'premade', 'cloned', 'generated', 'professional'
fine_tuning_state : typing.Optional[str]
State of the voice's fine tuning to filter by. Applicable only to professional voice clones. One of 'draft', 'not_verified', 'not_started', 'queued', 'fine_tuning', 'fine_tuned', 'failed', 'delayed'
collection_id : typing.Optional[str]
Collection ID to filter voices by.
include_total_count : typing.Optional[bool]
Whether to include the total count of voices found in the response. Incurs a performance cost.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[GetVoicesV2Response]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
"v2/voices",
base_url=self._client_wrapper.get_environment().base,
method="GET",
params={
"next_page_token": next_page_token,
"page_size": page_size,
"search": search,
"sort": sort,
"sort_direction": sort_direction,
"voice_type": voice_type,
"category": category,
"fine_tuning_state": fine_tuning_state,
"collection_id": collection_id,
"include_total_count": include_total_count,
},
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
GetVoicesV2Response,
construct_type(
type_=GetVoicesV2Response, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Gets a list of all available voices for a user with search, filtering and pagination.
Parameters
----------
next_page_token : typing.Optional[str]
The next page token to use for pagination. Returned from the previous request.
page_size : typing.Optional[int]
How many voices to return at maximum. Cannot exceed 100; defaults to 10. Page 0 may include more voices due to default voices being included.
search : typing.Optional[str]
Search term to filter voices by. Searches in name, description, labels, category.
sort : typing.Optional[str]
Which field to sort by, one of 'created_at_unix' or 'name'. 'created_at_unix' may not be available for older voices.
sort_direction : typing.Optional[str]
Which direction to sort the voices in. 'asc' or 'desc'.
voice_type : typing.Optional[str]
Type of the voice to filter by. One of 'personal', 'community', 'default', 'workspace', 'non-default'. 'non-default' is equal to 'personal' plus 'community'.
category : typing.Optional[str]
Category of the voice to filter by. One of 'premade', 'cloned', 'generated', 'professional'
fine_tuning_state : typing.Optional[str]
State of the voice's fine tuning to filter by. Applicable only to professional voice clones. One of 'draft', 'not_verified', 'not_started', 'queued', 'fine_tuning', 'fine_tuned', 'failed', 'delayed'
collection_id : typing.Optional[str]
Collection ID to filter voices by.
include_total_count : typing.Optional[bool]
Whether to include the total count of voices found in the response. Incurs a performance cost.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[GetVoicesV2Response]
Successful Response
| search | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/raw_client.py | MIT |
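The `search` endpoint above forwards its optional query parameters as a flat dict; anything left as `None` must be dropped so it is omitted from the URL entirely rather than sent as an empty value. A minimal illustrative sketch of that filtering step (the helper name is ours, not the SDK's):

```python
from typing import Any, Dict, Optional


def build_query_params(raw: Dict[str, Optional[Any]]) -> Dict[str, Any]:
    """Drop parameters that were left as None so they never reach the query string."""
    return {key: value for key, value in raw.items() if value is not None}


params = build_query_params({
    "page_size": 10,
    "search": "narrator",
    "sort": None,  # unset parameter: omitted entirely
    "include_total_count": True,
})
print(params)  # only the keys that were actually set survive
```

The same pattern applies to every GET endpoint in this file: the generated code passes the full parameter dict and relies on the HTTP layer to skip unset entries.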
def get(
self,
voice_id: str,
*,
with_settings: typing.Optional[bool] = None,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[Voice]:
"""
Returns metadata about a specific voice.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
with_settings : typing.Optional[bool]
This parameter is now deprecated. It is ignored and will be removed in a future version.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[Voice]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/voices/{jsonable_encoder(voice_id)}",
base_url=self._client_wrapper.get_environment().base,
method="GET",
params={
"with_settings": with_settings,
},
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
Voice,
construct_type(
type_=Voice, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Returns metadata about a specific voice.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
with_settings : typing.Optional[bool]
This parameter is now deprecated. It is ignored and will be removed in a future version.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[Voice]
Successful Response
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/raw_client.py | MIT |
def delete(
self, voice_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> HttpResponse[DeleteVoiceResponseModel]:
"""
Deletes a voice by its ID.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[DeleteVoiceResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/voices/{jsonable_encoder(voice_id)}",
base_url=self._client_wrapper.get_environment().base,
method="DELETE",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
DeleteVoiceResponseModel,
construct_type(
type_=DeleteVoiceResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Deletes a voice by its ID.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[DeleteVoiceResponseModel]
Successful Response
| delete | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/raw_client.py | MIT |
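Every endpoint in this client repeats the same response-dispatch pattern: decode the JSON body, return the data on a 2xx status, raise `UnprocessableEntityError` on 422, and fall back to a generic `ApiError` (including when the body is not valid JSON). A condensed, self-contained sketch of that flow, with constructors simplified relative to the SDK's actual classes:

```python
import json


class ApiError(Exception):
    """Generic API failure; carries the status code and decoded (or raw) body."""

    def __init__(self, status_code, body):
        super().__init__(f"status {status_code}: {body!r}")
        self.status_code = status_code
        self.body = body


class UnprocessableEntityError(ApiError):
    """Raised for 422 validation failures, mirroring the SDK's dedicated branch."""


def handle_response(status_code: int, text: str):
    """Decode a JSON body on success; raise a typed error otherwise."""
    try:
        body = json.loads(text)
    except json.JSONDecodeError:
        # Non-JSON body: report the raw text, as the generated code does.
        raise ApiError(status_code, text)
    if 200 <= status_code < 300:
        return body
    if status_code == 422:
        raise UnprocessableEntityError(422, body)
    raise ApiError(status_code, body)
```

Centralizing the dispatch like this is what the generator inlines into each method's `try`/`except JSONDecodeError` block.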
def update(
self,
voice_id: str,
*,
name: str,
files: typing.Optional[typing.List[core.File]] = OMIT,
remove_background_noise: typing.Optional[bool] = OMIT,
description: typing.Optional[str] = OMIT,
labels: typing.Optional[str] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[EditVoiceResponseModel]:
"""
Edit a voice created by you.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
name : str
The name that identifies this voice. This will be displayed in the dropdown of the website.
files : typing.Optional[typing.List[core.File]]
See core.File for more documentation
remove_background_noise : typing.Optional[bool]
If set, this will remove background noise from voice samples using our audio isolation model. If the samples do not include background noise, it can make the quality worse.
description : typing.Optional[str]
A description of the voice.
labels : typing.Optional[str]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[EditVoiceResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/voices/{jsonable_encoder(voice_id)}/edit",
base_url=self._client_wrapper.get_environment().base,
method="POST",
data={
"name": name,
"remove_background_noise": remove_background_noise,
"description": description,
"labels": labels,
},
files={
**({"files": files} if files is not None else {}),
},
request_options=request_options,
omit=OMIT,
force_multipart=True,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
EditVoiceResponseModel,
construct_type(
type_=EditVoiceResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Edit a voice created by you.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint list all the available voices.
name : str
The name that identifies this voice. This will be displayed in the dropdown of the website.
files : typing.Optional[typing.List[core.File]]
See core.File for more documentation
remove_background_noise : typing.Optional[bool]
If set, this will remove background noise from voice samples using our audio isolation model. If the samples do not include background noise, it can make the quality worse.
description : typing.Optional[str]
A description of the voice.
labels : typing.Optional[str]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[EditVoiceResponseModel]
Successful Response
| update | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/raw_client.py | MIT |
def share(
self,
public_user_id: str,
voice_id: str,
*,
new_name: str,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[AddVoiceResponseModel]:
"""
Add a shared voice to your collection of voices.
Parameters
----------
public_user_id : str
Public user ID used to publicly identify ElevenLabs users.
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
new_name : str
The name that identifies this voice. This will be displayed in the dropdown of the website.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[AddVoiceResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/voices/add/{jsonable_encoder(public_user_id)}/{jsonable_encoder(voice_id)}",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"new_name": new_name,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
AddVoiceResponseModel,
construct_type(
type_=AddVoiceResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Add a shared voice to your collection of voices.
Parameters
----------
public_user_id : str
Public user ID used to publicly identify ElevenLabs users.
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint list all the available voices.
new_name : str
The name that identifies this voice. This will be displayed in the dropdown of the website.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[AddVoiceResponseModel]
Successful Response
| share | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/raw_client.py | MIT |
def get_shared(
self,
*,
page_size: typing.Optional[int] = None,
category: typing.Optional[VoicesGetSharedRequestCategory] = None,
gender: typing.Optional[str] = None,
age: typing.Optional[str] = None,
accent: typing.Optional[str] = None,
language: typing.Optional[str] = None,
locale: typing.Optional[str] = None,
search: typing.Optional[str] = None,
use_cases: typing.Optional[typing.Union[str, typing.Sequence[str]]] = None,
descriptives: typing.Optional[typing.Union[str, typing.Sequence[str]]] = None,
featured: typing.Optional[bool] = None,
min_notice_period_days: typing.Optional[int] = None,
include_custom_rates: typing.Optional[bool] = None,
include_live_moderated: typing.Optional[bool] = None,
reader_app_enabled: typing.Optional[bool] = None,
owner_id: typing.Optional[str] = None,
sort: typing.Optional[str] = None,
page: typing.Optional[int] = None,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[GetLibraryVoicesResponse]:
"""
Retrieves a list of shared voices.
Parameters
----------
page_size : typing.Optional[int]
How many shared voices to return at maximum. Cannot exceed 100; defaults to 30.
category : typing.Optional[VoicesGetSharedRequestCategory]
Voice category used for filtering
gender : typing.Optional[str]
Gender used for filtering
age : typing.Optional[str]
Age used for filtering
accent : typing.Optional[str]
Accent used for filtering
language : typing.Optional[str]
Language used for filtering
locale : typing.Optional[str]
Locale used for filtering
search : typing.Optional[str]
Search term used for filtering
use_cases : typing.Optional[typing.Union[str, typing.Sequence[str]]]
Use-case used for filtering
descriptives : typing.Optional[typing.Union[str, typing.Sequence[str]]]
Search term used for filtering
featured : typing.Optional[bool]
Filter featured voices
min_notice_period_days : typing.Optional[int]
Filter voices with a minimum notice period of the given number of days.
include_custom_rates : typing.Optional[bool]
Include/exclude voices with custom rates
include_live_moderated : typing.Optional[bool]
Include/exclude voices that are live moderated
reader_app_enabled : typing.Optional[bool]
Filter voices that are enabled for the reader app
owner_id : typing.Optional[str]
Filter voices by public owner ID
sort : typing.Optional[str]
Sort criteria
page : typing.Optional[int]
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[GetLibraryVoicesResponse]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
"v1/shared-voices",
base_url=self._client_wrapper.get_environment().base,
method="GET",
params={
"page_size": page_size,
"category": category,
"gender": gender,
"age": age,
"accent": accent,
"language": language,
"locale": locale,
"search": search,
"use_cases": use_cases,
"descriptives": descriptives,
"featured": featured,
"min_notice_period_days": min_notice_period_days,
"include_custom_rates": include_custom_rates,
"include_live_moderated": include_live_moderated,
"reader_app_enabled": reader_app_enabled,
"owner_id": owner_id,
"sort": sort,
"page": page,
},
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
GetLibraryVoicesResponse,
construct_type(
type_=GetLibraryVoicesResponse, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Retrieves a list of shared voices.
Parameters
----------
page_size : typing.Optional[int]
How many shared voices to return at maximum. Cannot exceed 100; defaults to 30.
category : typing.Optional[VoicesGetSharedRequestCategory]
Voice category used for filtering
gender : typing.Optional[str]
Gender used for filtering
age : typing.Optional[str]
Age used for filtering
accent : typing.Optional[str]
Accent used for filtering
language : typing.Optional[str]
Language used for filtering
locale : typing.Optional[str]
Locale used for filtering
search : typing.Optional[str]
Search term used for filtering
use_cases : typing.Optional[typing.Union[str, typing.Sequence[str]]]
Use-case used for filtering
descriptives : typing.Optional[typing.Union[str, typing.Sequence[str]]]
Search term used for filtering
featured : typing.Optional[bool]
Filter featured voices
min_notice_period_days : typing.Optional[int]
Filter voices with a minimum notice period of the given number of days.
include_custom_rates : typing.Optional[bool]
Include/exclude voices with custom rates
include_live_moderated : typing.Optional[bool]
Include/exclude voices that are live moderated
reader_app_enabled : typing.Optional[bool]
Filter voices that are enabled for the reader app
owner_id : typing.Optional[str]
Filter voices by public owner ID
sort : typing.Optional[str]
Sort criteria
page : typing.Optional[int]
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[GetLibraryVoicesResponse]
Successful Response
| get_shared | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/raw_client.py | MIT |
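The `page_size` docstring above says the value cannot exceed 100 and defaults to 30, but the client itself performs no validation; a caller could enforce the documented bound before hitting the endpoint. A small sketch under that assumption (`clamp_page_size` is an illustrative helper, not part of the SDK):

```python
from typing import Optional


def clamp_page_size(requested: Optional[int], default: int = 30, maximum: int = 100) -> int:
    """Return a page size within the documented bounds for /v1/shared-voices."""
    if requested is None:
        return default  # the API's documented default
    return max(1, min(requested, maximum))
```

Clamping client-side avoids a round trip that would end in a 422 validation error for out-of-range values.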
def find_similar_voices(
self,
*,
audio_file: typing.Optional[core.File] = OMIT,
similarity_threshold: typing.Optional[float] = OMIT,
top_k: typing.Optional[int] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[GetLibraryVoicesResponse]:
"""
Returns a list of shared voices similar to the provided audio sample. If neither similarity_threshold nor top_k is provided, we will apply default values.
Parameters
----------
audio_file : typing.Optional[core.File]
See core.File for more documentation
similarity_threshold : typing.Optional[float]
Threshold for voice similarity between provided sample and library voices. Values range from 0 to 2. The smaller the value the more similar voices will be returned.
top_k : typing.Optional[int]
Number of most similar voices to return. If similarity_threshold is provided, less than this number of voices may be returned. Values range from 1 to 100.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[GetLibraryVoicesResponse]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
"v1/similar-voices",
base_url=self._client_wrapper.get_environment().base,
method="POST",
data={
"similarity_threshold": similarity_threshold,
"top_k": top_k,
},
files={
**({"audio_file": audio_file} if audio_file is not None else {}),
},
request_options=request_options,
omit=OMIT,
force_multipart=True,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
GetLibraryVoicesResponse,
construct_type(
type_=GetLibraryVoicesResponse, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Returns a list of shared voices similar to the provided audio sample. If neither similarity_threshold nor top_k is provided, we will apply default values.
Parameters
----------
audio_file : typing.Optional[core.File]
See core.File for more documentation
similarity_threshold : typing.Optional[float]
Threshold for voice similarity between provided sample and library voices. Values range from 0 to 2. The smaller the value the more similar voices will be returned.
top_k : typing.Optional[int]
Number of most similar voices to return. If similarity_threshold is provided, less than this number of voices may be returned. Values range from 1 to 100.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[GetLibraryVoicesResponse]
Successful Response
| find_similar_voices | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/raw_client.py | MIT |
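`find_similar_voices` builds a multipart request in which the scalar fields go into `data` while the audio sample is added to `files` only when one was actually supplied (`**({"audio_file": audio_file} if audio_file is not None else {})`). A hedged, self-contained sketch of that split, with the helper name ours:

```python
from typing import Any, Dict, Optional, Tuple


def split_form_fields(
    similarity_threshold: Optional[float] = None,
    top_k: Optional[int] = None,
    audio_file: Optional[bytes] = None,
) -> Tuple[Dict[str, Any], Dict[str, Any]]:
    """Separate scalar form fields from the optional file part of a multipart request."""
    data = {
        "similarity_threshold": similarity_threshold,
        "top_k": top_k,
    }
    # The file entry is added only when a sample was supplied, so the request
    # stays valid even when the caller relies on the server-side defaults.
    files = {"audio_file": audio_file} if audio_file is not None else {}
    return data, files
```

This mirrors why the endpoint can be called with no arguments at all, as in the async example earlier in the file.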
async def get_all(
self, *, show_legacy: typing.Optional[bool] = None, request_options: typing.Optional[RequestOptions] = None
) -> AsyncHttpResponse[GetVoicesResponse]:
"""
Returns a list of all available voices for a user.
Parameters
----------
show_legacy : typing.Optional[bool]
If set to true, legacy premade voices will be included in responses from /v1/voices
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[GetVoicesResponse]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
"v1/voices",
base_url=self._client_wrapper.get_environment().base,
method="GET",
params={
"show_legacy": show_legacy,
},
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
GetVoicesResponse,
construct_type(
type_=GetVoicesResponse, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Returns a list of all available voices for a user.
Parameters
----------
show_legacy : typing.Optional[bool]
If set to true, legacy premade voices will be included in responses from /v1/voices
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[GetVoicesResponse]
Successful Response
| get_all | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/raw_client.py | MIT |
async def search(
self,
*,
next_page_token: typing.Optional[str] = None,
page_size: typing.Optional[int] = None,
search: typing.Optional[str] = None,
sort: typing.Optional[str] = None,
sort_direction: typing.Optional[str] = None,
voice_type: typing.Optional[str] = None,
category: typing.Optional[str] = None,
fine_tuning_state: typing.Optional[str] = None,
collection_id: typing.Optional[str] = None,
include_total_count: typing.Optional[bool] = None,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[GetVoicesV2Response]:
"""
Gets a list of all available voices for a user with search, filtering and pagination.
Parameters
----------
next_page_token : typing.Optional[str]
The next page token to use for pagination. Returned from the previous request.
page_size : typing.Optional[int]
How many voices to return at maximum. Cannot exceed 100; defaults to 10. Page 0 may include more voices due to default voices being included.
search : typing.Optional[str]
Search term to filter voices by. Searches in name, description, labels, category.
sort : typing.Optional[str]
Which field to sort by, one of 'created_at_unix' or 'name'. 'created_at_unix' may not be available for older voices.
sort_direction : typing.Optional[str]
Which direction to sort the voices in. 'asc' or 'desc'.
voice_type : typing.Optional[str]
Type of the voice to filter by. One of 'personal', 'community', 'default', 'workspace', 'non-default'. 'non-default' is equal to 'personal' plus 'community'.
category : typing.Optional[str]
Category of the voice to filter by. One of 'premade', 'cloned', 'generated', 'professional'
fine_tuning_state : typing.Optional[str]
State of the voice's fine tuning to filter by. Applicable only to professional voice clones. One of 'draft', 'not_verified', 'not_started', 'queued', 'fine_tuning', 'fine_tuned', 'failed', 'delayed'
collection_id : typing.Optional[str]
Collection ID to filter voices by.
include_total_count : typing.Optional[bool]
Whether to include the total count of voices found in the response. Incurs a performance cost.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[GetVoicesV2Response]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
"v2/voices",
base_url=self._client_wrapper.get_environment().base,
method="GET",
params={
"next_page_token": next_page_token,
"page_size": page_size,
"search": search,
"sort": sort,
"sort_direction": sort_direction,
"voice_type": voice_type,
"category": category,
"fine_tuning_state": fine_tuning_state,
"collection_id": collection_id,
"include_total_count": include_total_count,
},
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
GetVoicesV2Response,
construct_type(
type_=GetVoicesV2Response, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Gets a list of all available voices for a user with search, filtering and pagination.
Parameters
----------
next_page_token : typing.Optional[str]
The next page token to use for pagination. Returned from the previous request.
page_size : typing.Optional[int]
How many voices to return at maximum. Cannot exceed 100; defaults to 10. Page 0 may include more voices due to default voices being included.
search : typing.Optional[str]
Search term to filter voices by. Searches in name, description, labels, category.
sort : typing.Optional[str]
Which field to sort by, one of 'created_at_unix' or 'name'. 'created_at_unix' may not be available for older voices.
sort_direction : typing.Optional[str]
Which direction to sort the voices in. 'asc' or 'desc'.
voice_type : typing.Optional[str]
Type of the voice to filter by. One of 'personal', 'community', 'default', 'workspace', 'non-default'. 'non-default' is equal to 'personal' plus 'community'.
category : typing.Optional[str]
Category of the voice to filter by. One of 'premade', 'cloned', 'generated', 'professional'.
fine_tuning_state : typing.Optional[str]
State of the voice's fine tuning to filter by. Applicable only to professional voice clones. One of 'draft', 'not_verified', 'not_started', 'queued', 'fine_tuning', 'fine_tuned', 'failed', 'delayed'.
collection_id : typing.Optional[str]
Collection ID to filter voices by.
include_total_count : typing.Optional[bool]
Whether to include the total count of voices found in the response. Incurs a performance cost.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[GetVoicesV2Response]
Successful Response
| search | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/raw_client.py | MIT |
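The `next_page_token` / `page_size` pair documented above implies cursor-style pagination: each response hands back the token to use in the next request, until no token is returned. A minimal sketch of the resulting loop, using a hypothetical `fetch_page` callable in place of the real HTTP call:

```python
from typing import Callable, Dict, List, Optional, Tuple

def collect_all_voices(
    fetch_page: Callable[[Optional[str]], Tuple[List[dict], Optional[str]]],
) -> List[dict]:
    """Follow next_page_token cursors until the server stops returning one."""
    voices: List[dict] = []
    token: Optional[str] = None
    while True:
        page, token = fetch_page(token)
        voices.extend(page)
        if token is None:  # no further pages
            return voices

# Stub standing in for the HTTP call: two pages keyed by cursor token.
_pages: Dict[Optional[str], Tuple[List[dict], Optional[str]]] = {
    None: ([{"voice_id": "a"}, {"voice_id": "b"}], "tok1"),
    "tok1": ([{"voice_id": "c"}], None),
}

all_voices = collect_all_voices(lambda token: _pages[token])
```

In practice `fetch_page` would wrap the client's voice-search call and read the token off the response model.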
async def get(
self,
voice_id: str,
*,
with_settings: typing.Optional[bool] = None,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[Voice]:
"""
Returns metadata about a specific voice.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
with_settings : typing.Optional[bool]
This parameter is now deprecated. It is ignored and will be removed in a future version.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[Voice]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/voices/{jsonable_encoder(voice_id)}",
base_url=self._client_wrapper.get_environment().base,
method="GET",
params={
"with_settings": with_settings,
},
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
Voice,
construct_type(
type_=Voice, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Returns metadata about a specific voice.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
with_settings : typing.Optional[bool]
This parameter is now deprecated. It is ignored and will be removed in a future version.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[Voice]
Successful Response
| get | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/raw_client.py | MIT |
async def delete(
self, voice_id: str, *, request_options: typing.Optional[RequestOptions] = None
) -> AsyncHttpResponse[DeleteVoiceResponseModel]:
"""
Deletes a voice by its ID.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[DeleteVoiceResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/voices/{jsonable_encoder(voice_id)}",
base_url=self._client_wrapper.get_environment().base,
method="DELETE",
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
DeleteVoiceResponseModel,
construct_type(
type_=DeleteVoiceResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Deletes a voice by its ID.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[DeleteVoiceResponseModel]
Successful Response
| delete | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/raw_client.py | MIT |
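Each raw-client method above repeats the same response dispatch: a 2xx body is parsed into the response model, a 422 raises a validation error, and anything else — or a non-JSON body — becomes a generic API error. A standalone sketch of that dispatch with stand-in exception types (`ValidationFailed` and `RequestFailed` are illustrative names, not the SDK's):

```python
import json

class ValidationFailed(Exception):
    pass

class RequestFailed(Exception):
    pass

def dispatch(status_code: int, body: str) -> dict:
    """Mirror the 2xx / 422 / fallback handling used by the raw client."""
    try:
        if 200 <= status_code < 300:
            return json.loads(body)
        if status_code == 422:
            raise ValidationFailed(json.loads(body))
        payload = json.loads(body)
    except json.JSONDecodeError:
        # Body was not JSON: surface the raw text instead of a parsed payload.
        raise RequestFailed(status_code, body)
    raise RequestFailed(status_code, payload)
```

The `except JSONDecodeError` arm is what lets a non-JSON error body still produce a useful exception rather than a parse failure.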
async def update(
self,
voice_id: str,
*,
name: str,
files: typing.Optional[typing.List[core.File]] = OMIT,
remove_background_noise: typing.Optional[bool] = OMIT,
description: typing.Optional[str] = OMIT,
labels: typing.Optional[str] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[EditVoiceResponseModel]:
"""
Edit a voice created by you.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
name : str
The name that identifies this voice. This will be displayed in the dropdown of the website.
files : typing.Optional[typing.List[core.File]]
See core.File for more documentation
remove_background_noise : typing.Optional[bool]
If set, this will remove background noise from voice samples using our audio isolation model. If the samples do not include background noise, it can make the quality worse.
description : typing.Optional[str]
A description of the voice.
labels : typing.Optional[str]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[EditVoiceResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/voices/{jsonable_encoder(voice_id)}/edit",
base_url=self._client_wrapper.get_environment().base,
method="POST",
data={
"name": name,
"remove_background_noise": remove_background_noise,
"description": description,
"labels": labels,
},
files={
**({"files": files} if files is not None else {}),
},
request_options=request_options,
omit=OMIT,
force_multipart=True,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
EditVoiceResponseModel,
construct_type(
type_=EditVoiceResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Edit a voice created by you.
Parameters
----------
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
name : str
The name that identifies this voice. This will be displayed in the dropdown of the website.
files : typing.Optional[typing.List[core.File]]
See core.File for more documentation
remove_background_noise : typing.Optional[bool]
If set, this will remove background noise from voice samples using our audio isolation model. If the samples do not include background noise, it can make the quality worse.
description : typing.Optional[str]
A description of the voice.
labels : typing.Optional[str]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[EditVoiceResponseModel]
Successful Response
| update | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/raw_client.py | MIT |
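Note the `files` mapping in `update`: the dict-unpacking idiom includes the multipart part only when the caller actually supplied files, unlike the IVC `create` methods further down, which always send it. The idiom in isolation:

```python
from typing import Any, Dict, List, Optional

def build_files_part(files: Optional[List[bytes]]) -> Dict[str, Any]:
    """Include the "files" key only when the caller actually passed files."""
    return {
        **({"files": files} if files is not None else {}),
    }
```

An omitted key means the multipart body carries no `files` field at all, which is different from sending an empty one.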
async def share(
self,
public_user_id: str,
voice_id: str,
*,
new_name: str,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[AddVoiceResponseModel]:
"""
Add a shared voice to your collection of Voices
Parameters
----------
public_user_id : str
Public user ID used to publicly identify ElevenLabs users.
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
new_name : str
The name that identifies this voice. This will be displayed in the dropdown of the website.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[AddVoiceResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/voices/add/{jsonable_encoder(public_user_id)}/{jsonable_encoder(voice_id)}",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"new_name": new_name,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
AddVoiceResponseModel,
construct_type(
type_=AddVoiceResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Add a shared voice to your collection of Voices
Parameters
----------
public_user_id : str
Public user ID used to publicly identify ElevenLabs users.
voice_id : str
ID of the voice to be used. You can use the [Get voices](/docs/api-reference/voices/search) endpoint to list all the available voices.
new_name : str
The name that identifies this voice. This will be displayed in the dropdown of the website.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[AddVoiceResponseModel]
Successful Response
| share | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/raw_client.py | MIT |
async def get_shared(
self,
*,
page_size: typing.Optional[int] = None,
category: typing.Optional[VoicesGetSharedRequestCategory] = None,
gender: typing.Optional[str] = None,
age: typing.Optional[str] = None,
accent: typing.Optional[str] = None,
language: typing.Optional[str] = None,
locale: typing.Optional[str] = None,
search: typing.Optional[str] = None,
use_cases: typing.Optional[typing.Union[str, typing.Sequence[str]]] = None,
descriptives: typing.Optional[typing.Union[str, typing.Sequence[str]]] = None,
featured: typing.Optional[bool] = None,
min_notice_period_days: typing.Optional[int] = None,
include_custom_rates: typing.Optional[bool] = None,
include_live_moderated: typing.Optional[bool] = None,
reader_app_enabled: typing.Optional[bool] = None,
owner_id: typing.Optional[str] = None,
sort: typing.Optional[str] = None,
page: typing.Optional[int] = None,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[GetLibraryVoicesResponse]:
"""
Retrieves a list of shared voices.
Parameters
----------
page_size : typing.Optional[int]
How many shared voices to return at maximum. Cannot exceed 100; defaults to 30.
category : typing.Optional[VoicesGetSharedRequestCategory]
Voice category used for filtering
gender : typing.Optional[str]
Gender used for filtering
age : typing.Optional[str]
Age used for filtering
accent : typing.Optional[str]
Accent used for filtering
language : typing.Optional[str]
Language used for filtering
locale : typing.Optional[str]
Locale used for filtering
search : typing.Optional[str]
Search term used for filtering
use_cases : typing.Optional[typing.Union[str, typing.Sequence[str]]]
Use-case used for filtering
descriptives : typing.Optional[typing.Union[str, typing.Sequence[str]]]
Search term used for filtering
featured : typing.Optional[bool]
Filter featured voices
min_notice_period_days : typing.Optional[int]
Filter voices with a minimum notice period of the given number of days.
include_custom_rates : typing.Optional[bool]
Include/exclude voices with custom rates
include_live_moderated : typing.Optional[bool]
Include/exclude voices that are live moderated
reader_app_enabled : typing.Optional[bool]
Filter voices that are enabled for the reader app
owner_id : typing.Optional[str]
Filter voices by public owner ID
sort : typing.Optional[str]
Sort criteria
page : typing.Optional[int]
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[GetLibraryVoicesResponse]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
"v1/shared-voices",
base_url=self._client_wrapper.get_environment().base,
method="GET",
params={
"page_size": page_size,
"category": category,
"gender": gender,
"age": age,
"accent": accent,
"language": language,
"locale": locale,
"search": search,
"use_cases": use_cases,
"descriptives": descriptives,
"featured": featured,
"min_notice_period_days": min_notice_period_days,
"include_custom_rates": include_custom_rates,
"include_live_moderated": include_live_moderated,
"reader_app_enabled": reader_app_enabled,
"owner_id": owner_id,
"sort": sort,
"page": page,
},
request_options=request_options,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
GetLibraryVoicesResponse,
construct_type(
type_=GetLibraryVoicesResponse, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Retrieves a list of shared voices.
Parameters
----------
page_size : typing.Optional[int]
How many shared voices to return at maximum. Cannot exceed 100; defaults to 30.
category : typing.Optional[VoicesGetSharedRequestCategory]
Voice category used for filtering
gender : typing.Optional[str]
Gender used for filtering
age : typing.Optional[str]
Age used for filtering
accent : typing.Optional[str]
Accent used for filtering
language : typing.Optional[str]
Language used for filtering
locale : typing.Optional[str]
Locale used for filtering
search : typing.Optional[str]
Search term used for filtering
use_cases : typing.Optional[typing.Union[str, typing.Sequence[str]]]
Use-case used for filtering
descriptives : typing.Optional[typing.Union[str, typing.Sequence[str]]]
Search term used for filtering
featured : typing.Optional[bool]
Filter featured voices
min_notice_period_days : typing.Optional[int]
Filter voices with a minimum notice period of the given number of days.
include_custom_rates : typing.Optional[bool]
Include/exclude voices with custom rates
include_live_moderated : typing.Optional[bool]
Include/exclude voices that are live moderated
reader_app_enabled : typing.Optional[bool]
Filter voices that are enabled for the reader app
owner_id : typing.Optional[str]
Filter voices by public owner ID
sort : typing.Optional[str]
Sort criteria
page : typing.Optional[int]
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[GetLibraryVoicesResponse]
Successful Response
| get_shared | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/raw_client.py | MIT |
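The request above passes all of its optional filters in a single `params` dict; filters left as `None` must not end up in the query string. A hedged sketch of that pruning step (the SDK's HTTP wrapper handles this internally, so the exact mechanism shown here is an assumption):

```python
from typing import Any, Dict, Optional

def prune_params(params: Dict[str, Optional[Any]]) -> Dict[str, Any]:
    """Drop filters that were left unset so they never reach the query string."""
    return {key: value for key, value in params.items() if value is not None}

query = prune_params(
    {"page_size": 30, "category": None, "featured": True, "page": None}
)
```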
async def find_similar_voices(
self,
*,
audio_file: typing.Optional[core.File] = OMIT,
similarity_threshold: typing.Optional[float] = OMIT,
top_k: typing.Optional[int] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[GetLibraryVoicesResponse]:
"""
Returns a list of shared voices similar to the provided audio sample. If neither similarity_threshold nor top_k is provided, we will apply default values.
Parameters
----------
audio_file : typing.Optional[core.File]
See core.File for more documentation
similarity_threshold : typing.Optional[float]
Threshold for voice similarity between the provided sample and library voices. Values range from 0 to 2. The smaller the value, the more similar the returned voices will be.
top_k : typing.Optional[int]
Number of most similar voices to return. If similarity_threshold is provided, fewer than this number of voices may be returned. Values range from 1 to 100.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[GetLibraryVoicesResponse]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
"v1/similar-voices",
base_url=self._client_wrapper.get_environment().base,
method="POST",
data={
"similarity_threshold": similarity_threshold,
"top_k": top_k,
},
files={
**({"audio_file": audio_file} if audio_file is not None else {}),
},
request_options=request_options,
omit=OMIT,
force_multipart=True,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
GetLibraryVoicesResponse,
construct_type(
type_=GetLibraryVoicesResponse, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Returns a list of shared voices similar to the provided audio sample. If neither similarity_threshold nor top_k is provided, we will apply default values.
Parameters
----------
audio_file : typing.Optional[core.File]
See core.File for more documentation
similarity_threshold : typing.Optional[float]
Threshold for voice similarity between the provided sample and library voices. Values range from 0 to 2. The smaller the value, the more similar the returned voices will be.
top_k : typing.Optional[int]
Number of most similar voices to return. If similarity_threshold is provided, fewer than this number of voices may be returned. Values range from 1 to 100.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[GetLibraryVoicesResponse]
Successful Response
| find_similar_voices | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/raw_client.py | MIT |
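The semantics described above — `similarity_threshold` is a distance in [0, 2] where smaller means more similar, and `top_k` caps the result count — can be pictured as a filter followed by a truncation. A purely illustrative client-side analogue (the real ranking happens server-side):

```python
from typing import List, Optional, Tuple

def select_similar(
    scored: List[Tuple[str, float]],
    similarity_threshold: Optional[float] = None,
    top_k: Optional[int] = None,
) -> List[str]:
    """Keep voices whose distance is under the threshold, then take the k closest."""
    ranked = sorted(scored, key=lambda pair: pair[1])  # smaller distance = more similar
    if similarity_threshold is not None:
        ranked = [pair for pair in ranked if pair[1] <= similarity_threshold]
    if top_k is not None:
        ranked = ranked[:top_k]
    return [voice_id for voice_id, _ in ranked]
```

This also shows why a tight threshold can yield fewer than `top_k` results, as the docstring notes.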
def create(
self,
*,
name: str,
files: typing.List[core.File],
remove_background_noise: typing.Optional[bool] = OMIT,
description: typing.Optional[str] = OMIT,
labels: typing.Optional[str] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AddVoiceIvcResponseModel:
"""
Create a voice clone and add it to your Voices
Parameters
----------
name : str
The name that identifies this voice. This will be displayed in the dropdown of the website.
files : typing.List[core.File]
See core.File for more documentation
remove_background_noise : typing.Optional[bool]
If set, this will remove background noise from voice samples using our audio isolation model. If the samples do not include background noise, it can make the quality worse.
description : typing.Optional[str]
A description of the voice.
labels : typing.Optional[str]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddVoiceIvcResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.voices.ivc.create(
name="name",
)
"""
_response = self._raw_client.create(
name=name,
files=files,
remove_background_noise=remove_background_noise,
description=description,
labels=labels,
request_options=request_options,
)
return _response.data |
Create a voice clone and add it to your Voices
Parameters
----------
name : str
The name that identifies this voice. This will be displayed in the dropdown of the website.
files : typing.List[core.File]
See core.File for more documentation
remove_background_noise : typing.Optional[bool]
If set, this will remove background noise from voice samples using our audio isolation model. If the samples do not include background noise, it can make the quality worse.
description : typing.Optional[str]
A description of the voice.
labels : typing.Optional[str]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddVoiceIvcResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.voices.ivc.create(
name="name",
)
| create | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/ivc/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/ivc/client.py | MIT |
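The `labels` parameter above is a *serialized* dictionary: the caller JSON-encodes the mapping and passes it as a string. A small helper showing that encoding step (the label keys used here are illustrative, not a fixed schema):

```python
import json
from typing import Dict

def serialize_labels(labels: Dict[str, str]) -> str:
    """Encode a labels mapping into the string form the endpoint expects."""
    return json.dumps(labels, sort_keys=True)

labels_payload = serialize_labels({"accent": "british", "use_case": "narration"})
```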
async def create(
self,
*,
name: str,
files: typing.List[core.File],
remove_background_noise: typing.Optional[bool] = OMIT,
description: typing.Optional[str] = OMIT,
labels: typing.Optional[str] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AddVoiceIvcResponseModel:
"""
Create a voice clone and add it to your Voices
Parameters
----------
name : str
The name that identifies this voice. This will be displayed in the dropdown of the website.
files : typing.List[core.File]
See core.File for more documentation
remove_background_noise : typing.Optional[bool]
If set will remove background noise for voice samples using our audio isolation model. If the samples do not include background noise, it can make the quality worse.
description : typing.Optional[str]
A description of the voice.
labels : typing.Optional[str]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddVoiceIvcResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.voices.ivc.create(
name="name",
)
asyncio.run(main())
"""
_response = await self._raw_client.create(
name=name,
files=files,
remove_background_noise=remove_background_noise,
description=description,
labels=labels,
request_options=request_options,
)
return _response.data |
Create a voice clone and add it to your Voices
Parameters
----------
name : str
The name that identifies this voice. This will be displayed in the dropdown of the website.
files : typing.List[core.File]
See core.File for more documentation
remove_background_noise : typing.Optional[bool]
If set will remove background noise for voice samples using our audio isolation model. If the samples do not include background noise, it can make the quality worse.
description : typing.Optional[str]
A description of the voice.
labels : typing.Optional[str]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddVoiceIvcResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.voices.ivc.create(
name="name",
)
asyncio.run(main())
| create | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/ivc/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/ivc/client.py | MIT |
def create(
self,
*,
name: str,
files: typing.List[core.File],
remove_background_noise: typing.Optional[bool] = OMIT,
description: typing.Optional[str] = OMIT,
labels: typing.Optional[str] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[AddVoiceIvcResponseModel]:
"""
Create a voice clone and add it to your Voices
Parameters
----------
name : str
The name that identifies this voice. This will be displayed in the dropdown of the website.
files : typing.List[core.File]
See core.File for more documentation
remove_background_noise : typing.Optional[bool]
If set, this will remove background noise from voice samples using our audio isolation model. If the samples do not include background noise, it can make the quality worse.
description : typing.Optional[str]
A description of the voice.
labels : typing.Optional[str]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[AddVoiceIvcResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
"v1/voices/add",
base_url=self._client_wrapper.get_environment().base,
method="POST",
data={
"name": name,
"remove_background_noise": remove_background_noise,
"description": description,
"labels": labels,
},
files={
"files": files,
},
request_options=request_options,
omit=OMIT,
force_multipart=True,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
AddVoiceIvcResponseModel,
construct_type(
type_=AddVoiceIvcResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Create a voice clone and add it to your Voices
Parameters
----------
name : str
The name that identifies this voice. This will be displayed in the dropdown of the website.
files : typing.List[core.File]
See core.File for more documentation
remove_background_noise : typing.Optional[bool]
If set, this will remove background noise from voice samples using our audio isolation model. If the samples do not include background noise, it can make the quality worse.
description : typing.Optional[str]
A description of the voice.
labels : typing.Optional[str]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[AddVoiceIvcResponseModel]
Successful Response
| create | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/ivc/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/ivc/raw_client.py | MIT |
async def create(
self,
*,
name: str,
files: typing.List[core.File],
remove_background_noise: typing.Optional[bool] = OMIT,
description: typing.Optional[str] = OMIT,
labels: typing.Optional[str] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[AddVoiceIvcResponseModel]:
"""
Create a voice clone and add it to your Voices
Parameters
----------
name : str
The name that identifies this voice. This will be displayed in the dropdown of the website.
files : typing.List[core.File]
See core.File for more documentation
remove_background_noise : typing.Optional[bool]
If set, this will remove background noise from voice samples using our audio isolation model. If the samples do not include background noise, it can make the quality worse.
description : typing.Optional[str]
A description of the voice.
labels : typing.Optional[str]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[AddVoiceIvcResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
"v1/voices/add",
base_url=self._client_wrapper.get_environment().base,
method="POST",
data={
"name": name,
"remove_background_noise": remove_background_noise,
"description": description,
"labels": labels,
},
files={
"files": files,
},
request_options=request_options,
omit=OMIT,
force_multipart=True,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
AddVoiceIvcResponseModel,
construct_type(
type_=AddVoiceIvcResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Create a voice clone and add it to your Voices
Parameters
----------
name : str
The name that identifies this voice. This will be displayed in the dropdown of the website.
files : typing.List[core.File]
See core.File for more documentation
remove_background_noise : typing.Optional[bool]
If set, background noise will be removed from the voice samples using our audio isolation model. If the samples do not contain background noise, this can degrade quality.
description : typing.Optional[str]
A description of the voice.
labels : typing.Optional[str]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[AddVoiceIvcResponseModel]
Successful Response
| create | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/ivc/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/ivc/raw_client.py | MIT |
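Throughout these methods, optional parameters default to an OMIT sentinel rather than None, which lets the client drop unset fields from the request body while still allowing a caller to pass an explicit None. A minimal, self-contained sketch of that pattern (hypothetical names, not the SDK's actual internals):

```python
class _Omit:
    """Sentinel distinct from None: marks "not provided"."""

    def __bool__(self) -> bool:
        return False


OMIT = _Omit()


def build_body(**params):
    # Drop any field still set to the OMIT sentinel; an explicit
    # None survives, so "unset" and "null" stay distinguishable.
    return {k: v for k, v in params.items() if v is not OMIT}


body = build_body(name="My Voice", description=OMIT, labels=None)
```

Here `description` never reaches the payload, while `labels` is sent as an explicit null.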
def update(
self,
voice_id: str,
*,
name: typing.Optional[str] = OMIT,
language: typing.Optional[str] = OMIT,
description: typing.Optional[str] = OMIT,
labels: typing.Optional[typing.Dict[str, typing.Optional[str]]] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AddVoiceResponseModel:
"""
Edit PVC voice metadata
Parameters
----------
voice_id : str
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
name : typing.Optional[str]
The name that identifies this voice. This will be displayed in the dropdown of the website.
language : typing.Optional[str]
Language used in the samples.
description : typing.Optional[str]
Description to use for the created voice.
labels : typing.Optional[typing.Dict[str, typing.Optional[str]]]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddVoiceResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.voices.pvc.update(
voice_id="21m00Tcm4TlvDq8ikWAM",
)
"""
_response = self._raw_client.update(
voice_id,
name=name,
language=language,
description=description,
labels=labels,
request_options=request_options,
)
return _response.data |
Edit PVC voice metadata
Parameters
----------
voice_id : str
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
name : typing.Optional[str]
The name that identifies this voice. This will be displayed in the dropdown of the website.
language : typing.Optional[str]
Language used in the samples.
description : typing.Optional[str]
Description to use for the created voice.
labels : typing.Optional[typing.Dict[str, typing.Optional[str]]]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddVoiceResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.voices.pvc.update(
voice_id="21m00Tcm4TlvDq8ikWAM",
)
| update | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/pvc/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/pvc/client.py | MIT |
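Note the asymmetry in the labels parameter across these rows: the PVC endpoints take a dictionary, while the IVC create endpoint above expects a pre-serialized string ("Serialized labels dictionary"). A small sketch of preparing labels for the string-typed variant, assuming JSON is the expected serialization:

```python
import json

labels = {"accent": "american", "use_case": "narration"}

# The multipart IVC endpoint takes labels as a single string field,
# so the dictionary is serialized before sending.
serialized_labels = json.dumps(labels)
```

The dict-typed PVC endpoints would take `labels` as-is, with no serialization step.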
def train(
self,
voice_id: str,
*,
model_id: typing.Optional[str] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> StartPvcVoiceTrainingResponseModel:
"""
Start PVC training process for a voice.
Parameters
----------
voice_id : str
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
model_id : typing.Optional[str]
The model ID to use for the conversion.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
StartPvcVoiceTrainingResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.voices.pvc.train(
voice_id="21m00Tcm4TlvDq8ikWAM",
)
"""
_response = self._raw_client.train(voice_id, model_id=model_id, request_options=request_options)
return _response.data |
Start PVC training process for a voice.
Parameters
----------
voice_id : str
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
model_id : typing.Optional[str]
The model ID to use for the conversion.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
StartPvcVoiceTrainingResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.voices.pvc.train(
voice_id="21m00Tcm4TlvDq8ikWAM",
)
| train | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/pvc/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/pvc/client.py | MIT |
async def update(
self,
voice_id: str,
*,
name: typing.Optional[str] = OMIT,
language: typing.Optional[str] = OMIT,
description: typing.Optional[str] = OMIT,
labels: typing.Optional[typing.Dict[str, typing.Optional[str]]] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AddVoiceResponseModel:
"""
Edit PVC voice metadata
Parameters
----------
voice_id : str
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
name : typing.Optional[str]
The name that identifies this voice. This will be displayed in the dropdown of the website.
language : typing.Optional[str]
Language used in the samples.
description : typing.Optional[str]
Description to use for the created voice.
labels : typing.Optional[typing.Dict[str, typing.Optional[str]]]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddVoiceResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.voices.pvc.update(
voice_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
"""
_response = await self._raw_client.update(
voice_id,
name=name,
language=language,
description=description,
labels=labels,
request_options=request_options,
)
return _response.data |
Edit PVC voice metadata
Parameters
----------
voice_id : str
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
name : typing.Optional[str]
The name that identifies this voice. This will be displayed in the dropdown of the website.
language : typing.Optional[str]
Language used in the samples.
description : typing.Optional[str]
Description to use for the created voice.
labels : typing.Optional[typing.Dict[str, typing.Optional[str]]]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddVoiceResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.voices.pvc.update(
voice_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
| update | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/pvc/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/pvc/client.py | MIT |
async def train(
self,
voice_id: str,
*,
model_id: typing.Optional[str] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> StartPvcVoiceTrainingResponseModel:
"""
Start PVC training process for a voice.
Parameters
----------
voice_id : str
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
model_id : typing.Optional[str]
The model ID to use for the conversion.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
StartPvcVoiceTrainingResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.voices.pvc.train(
voice_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
"""
_response = await self._raw_client.train(voice_id, model_id=model_id, request_options=request_options)
return _response.data |
Start PVC training process for a voice.
Parameters
----------
voice_id : str
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
model_id : typing.Optional[str]
The model ID to use for the conversion.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
StartPvcVoiceTrainingResponseModel
Successful Response
Examples
--------
import asyncio
from elevenlabs import AsyncElevenLabs
client = AsyncElevenLabs(
api_key="YOUR_API_KEY",
)
async def main() -> None:
await client.voices.pvc.train(
voice_id="21m00Tcm4TlvDq8ikWAM",
)
asyncio.run(main())
| train | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/pvc/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/pvc/client.py | MIT |
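The async variants above all follow the same shape: wrap the SDK call in a coroutine and drive it with asyncio.run. A self-contained sketch of that control flow, with a stub coroutine standing in for the real network call:

```python
import asyncio


async def train_voice(voice_id: str) -> dict:
    # Stand-in for `await client.voices.pvc.train(voice_id=...)`;
    # a real call would perform an HTTP request here.
    await asyncio.sleep(0)
    return {"voice_id": voice_id, "status": "training"}


async def main() -> dict:
    return await train_voice("21m00Tcm4TlvDq8ikWAM")


result = asyncio.run(main())
```

The response fields shown are illustrative only; the actual shape is StartPvcVoiceTrainingResponseModel.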
def create(
self,
*,
name: str,
language: str,
description: typing.Optional[str] = OMIT,
labels: typing.Optional[typing.Dict[str, typing.Optional[str]]] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[AddVoiceResponseModel]:
"""
Creates a new PVC voice with metadata but no samples
Parameters
----------
name : str
The name that identifies this voice. This will be displayed in the dropdown of the website.
language : str
Language used in the samples.
description : typing.Optional[str]
Description to use for the created voice.
labels : typing.Optional[typing.Dict[str, typing.Optional[str]]]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[AddVoiceResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
"v1/voices/pvc",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"name": name,
"language": language,
"description": description,
"labels": labels,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
AddVoiceResponseModel,
construct_type(
type_=AddVoiceResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Creates a new PVC voice with metadata but no samples
Parameters
----------
name : str
The name that identifies this voice. This will be displayed in the dropdown of the website.
language : str
Language used in the samples.
description : typing.Optional[str]
Description to use for the created voice.
labels : typing.Optional[typing.Dict[str, typing.Optional[str]]]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[AddVoiceResponseModel]
Successful Response
| create | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/pvc/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/pvc/raw_client.py | MIT |
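The try/except in each raw-client method implements a common dispatch: parse 2xx bodies as the success model, map 422 to a validation error, and fall through to a generic ApiError (raised with the parsed JSON when the body parses, or with the raw text when it does not). A simplified, self-contained sketch of that logic, without HTTP or the SDK's model types:

```python
import json


class ApiError(Exception):
    def __init__(self, status_code, body):
        super().__init__(f"{status_code}: {body!r}")
        self.status_code = status_code
        self.body = body


class UnprocessableEntityError(ApiError):
    """Mirrors the 422 validation-error branch."""


def dispatch(status_code: int, raw_body: str):
    # Unparseable body: surface the raw text instead of JSON.
    try:
        parsed = json.loads(raw_body)
    except json.JSONDecodeError:
        raise ApiError(status_code, raw_body)
    if 200 <= status_code < 300:
        return parsed
    if status_code == 422:
        raise UnprocessableEntityError(status_code, parsed)
    raise ApiError(status_code, parsed)


ok = dispatch(200, '{"voice_id": "abc"}')
```

The real methods additionally run the parsed JSON through construct_type to build the typed response model.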
def update(
self,
voice_id: str,
*,
name: typing.Optional[str] = OMIT,
language: typing.Optional[str] = OMIT,
description: typing.Optional[str] = OMIT,
labels: typing.Optional[typing.Dict[str, typing.Optional[str]]] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[AddVoiceResponseModel]:
"""
Edit PVC voice metadata
Parameters
----------
voice_id : str
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
name : typing.Optional[str]
The name that identifies this voice. This will be displayed in the dropdown of the website.
language : typing.Optional[str]
Language used in the samples.
description : typing.Optional[str]
Description to use for the created voice.
labels : typing.Optional[typing.Dict[str, typing.Optional[str]]]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[AddVoiceResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/voices/pvc/{jsonable_encoder(voice_id)}",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"name": name,
"language": language,
"description": description,
"labels": labels,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
AddVoiceResponseModel,
construct_type(
type_=AddVoiceResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Edit PVC voice metadata
Parameters
----------
voice_id : str
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
name : typing.Optional[str]
The name that identifies this voice. This will be displayed in the dropdown of the website.
language : typing.Optional[str]
Language used in the samples.
description : typing.Optional[str]
Description to use for the created voice.
labels : typing.Optional[typing.Dict[str, typing.Optional[str]]]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[AddVoiceResponseModel]
Successful Response
| update | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/pvc/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/pvc/raw_client.py | MIT |
def train(
self,
voice_id: str,
*,
model_id: typing.Optional[str] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> HttpResponse[StartPvcVoiceTrainingResponseModel]:
"""
Start PVC training process for a voice.
Parameters
----------
voice_id : str
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
model_id : typing.Optional[str]
The model ID to use for the conversion.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[StartPvcVoiceTrainingResponseModel]
Successful Response
"""
_response = self._client_wrapper.httpx_client.request(
f"v1/voices/pvc/{jsonable_encoder(voice_id)}/train",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"model_id": model_id,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
StartPvcVoiceTrainingResponseModel,
construct_type(
type_=StartPvcVoiceTrainingResponseModel, # type: ignore
object_=_response.json(),
),
)
return HttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Start PVC training process for a voice.
Parameters
----------
voice_id : str
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
model_id : typing.Optional[str]
The model ID to use for the conversion.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
HttpResponse[StartPvcVoiceTrainingResponseModel]
Successful Response
| train | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/pvc/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/pvc/raw_client.py | MIT |
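The request path above interpolates the voice ID into the URL (via jsonable_encoder). When an ID comes from untrusted input, it is worth percent-encoding it so characters like "/" cannot alter the route. A hypothetical helper using only the standard library (this is the general idea, not the SDK's exact mechanism):

```python
from urllib.parse import quote


def train_path(voice_id: str) -> str:
    # safe='' also encodes "/", so a malicious ID cannot
    # escape its path segment.
    return f"v1/voices/pvc/{quote(voice_id, safe='')}/train"


path = train_path("21m00Tcm4TlvDq8ikWAM")
```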
async def create(
self,
*,
name: str,
language: str,
description: typing.Optional[str] = OMIT,
labels: typing.Optional[typing.Dict[str, typing.Optional[str]]] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[AddVoiceResponseModel]:
"""
Creates a new PVC voice with metadata but no samples
Parameters
----------
name : str
The name that identifies this voice. This will be displayed in the dropdown of the website.
language : str
Language used in the samples.
description : typing.Optional[str]
Description to use for the created voice.
labels : typing.Optional[typing.Dict[str, typing.Optional[str]]]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[AddVoiceResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
"v1/voices/pvc",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"name": name,
"language": language,
"description": description,
"labels": labels,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
AddVoiceResponseModel,
construct_type(
type_=AddVoiceResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Creates a new PVC voice with metadata but no samples
Parameters
----------
name : str
The name that identifies this voice. This will be displayed in the dropdown of the website.
language : str
Language used in the samples.
description : typing.Optional[str]
Description to use for the created voice.
labels : typing.Optional[typing.Dict[str, typing.Optional[str]]]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[AddVoiceResponseModel]
Successful Response
| create | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/pvc/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/pvc/raw_client.py | MIT |
async def update(
self,
voice_id: str,
*,
name: typing.Optional[str] = OMIT,
language: typing.Optional[str] = OMIT,
description: typing.Optional[str] = OMIT,
labels: typing.Optional[typing.Dict[str, typing.Optional[str]]] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[AddVoiceResponseModel]:
"""
Edit PVC voice metadata
Parameters
----------
voice_id : str
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
name : typing.Optional[str]
The name that identifies this voice. This will be displayed in the dropdown of the website.
language : typing.Optional[str]
Language used in the samples.
description : typing.Optional[str]
Description to use for the created voice.
labels : typing.Optional[typing.Dict[str, typing.Optional[str]]]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[AddVoiceResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/voices/pvc/{jsonable_encoder(voice_id)}",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"name": name,
"language": language,
"description": description,
"labels": labels,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
AddVoiceResponseModel,
construct_type(
type_=AddVoiceResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Edit PVC voice metadata
Parameters
----------
voice_id : str
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
name : typing.Optional[str]
The name that identifies this voice. This will be displayed in the dropdown of the website.
language : typing.Optional[str]
Language used in the samples.
description : typing.Optional[str]
Description to use for the created voice.
labels : typing.Optional[typing.Dict[str, typing.Optional[str]]]
Serialized labels dictionary for the voice.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[AddVoiceResponseModel]
Successful Response
| update | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/pvc/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/pvc/raw_client.py | MIT |
async def train(
self,
voice_id: str,
*,
model_id: typing.Optional[str] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AsyncHttpResponse[StartPvcVoiceTrainingResponseModel]:
"""
Start PVC training process for a voice.
Parameters
----------
voice_id : str
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
model_id : typing.Optional[str]
The model ID to use for the conversion.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[StartPvcVoiceTrainingResponseModel]
Successful Response
"""
_response = await self._client_wrapper.httpx_client.request(
f"v1/voices/pvc/{jsonable_encoder(voice_id)}/train",
base_url=self._client_wrapper.get_environment().base,
method="POST",
json={
"model_id": model_id,
},
headers={
"content-type": "application/json",
},
request_options=request_options,
omit=OMIT,
)
try:
if 200 <= _response.status_code < 300:
_data = typing.cast(
StartPvcVoiceTrainingResponseModel,
construct_type(
type_=StartPvcVoiceTrainingResponseModel, # type: ignore
object_=_response.json(),
),
)
return AsyncHttpResponse(response=_response, data=_data)
if _response.status_code == 422:
raise UnprocessableEntityError(
headers=dict(_response.headers),
body=typing.cast(
HttpValidationError,
construct_type(
type_=HttpValidationError, # type: ignore
object_=_response.json(),
),
),
)
_response_json = _response.json()
except JSONDecodeError:
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response.text)
raise ApiError(status_code=_response.status_code, headers=dict(_response.headers), body=_response_json) |
Start PVC training process for a voice.
Parameters
----------
voice_id : str
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
model_id : typing.Optional[str]
The model ID to use for the conversion.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AsyncHttpResponse[StartPvcVoiceTrainingResponseModel]
Successful Response
| train | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/pvc/raw_client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/pvc/raw_client.py | MIT |
def create(
self,
voice_id: str,
*,
files: typing.List[core.File],
remove_background_noise: typing.Optional[bool] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> typing.List[VoiceSample]:
"""
Add audio samples to a PVC voice
Parameters
----------
voice_id : str
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
files : typing.List[core.File]
See core.File for more documentation
remove_background_noise : typing.Optional[bool]
If set, background noise will be removed from the voice samples using our audio isolation model. If the samples do not contain background noise, this can degrade quality.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
typing.List[VoiceSample]
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.voices.pvc.samples.create(
voice_id="21m00Tcm4TlvDq8ikWAM",
)
"""
_response = self._raw_client.create(
voice_id, files=files, remove_background_noise=remove_background_noise, request_options=request_options
)
return _response.data |
Add audio samples to a PVC voice
Parameters
----------
voice_id : str
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
files : typing.List[core.File]
See core.File for more documentation
remove_background_noise : typing.Optional[bool]
If set, background noise will be removed from the voice samples using our audio isolation model. If the samples do not contain background noise, this can degrade quality.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
typing.List[VoiceSample]
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.voices.pvc.samples.create(
voice_id="21m00Tcm4TlvDq8ikWAM",
)
| create | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/pvc/samples/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/pvc/samples/client.py | MIT |
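The files parameter above is a list of file-like inputs for a multipart upload. A sketch of assembling such a list from in-memory buffers (the filenames and byte contents are illustrative, and core.File accepts several shapes; check its documentation for the exact forms):

```python
import io

# In-memory buffers stand in for real audio files opened from disk;
# each entry pairs a filename with its raw bytes.
samples = [
    ("sample_1.mp3", io.BytesIO(b"\x49\x44\x33fake-mp3-bytes")),
    ("sample_2.mp3", io.BytesIO(b"\x49\x44\x33more-bytes")),
]

files = [(name, buf.getvalue()) for name, buf in samples]
```

With files opened from disk, the same list would typically hold `(path.name, open(path, "rb"))` pairs instead.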
def update(
self,
voice_id: str,
sample_id: str,
*,
remove_background_noise: typing.Optional[bool] = OMIT,
selected_speaker_ids: typing.Optional[typing.Sequence[str]] = OMIT,
trim_start_time: typing.Optional[int] = OMIT,
trim_end_time: typing.Optional[int] = OMIT,
request_options: typing.Optional[RequestOptions] = None,
) -> AddVoiceResponseModel:
"""
Update a PVC voice sample: apply noise removal or select a speaker.
Parameters
----------
voice_id : str
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
sample_id : str
Sample ID to be used
remove_background_noise : typing.Optional[bool]
If set, background noise will be removed from the voice samples using our audio isolation model. If the samples do not contain background noise, this can degrade quality.
selected_speaker_ids : typing.Optional[typing.Sequence[str]]
Speaker IDs to be used for PVC training. Send all the speaker IDs you want to use in a single request, because each request overrides the previous selection.
trim_start_time : typing.Optional[int]
The start time of the audio to be used for PVC training, in milliseconds.
trim_end_time : typing.Optional[int]
The end time of the audio to be used for PVC training, in milliseconds.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddVoiceResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.voices.pvc.samples.update(
voice_id="21m00Tcm4TlvDq8ikWAM",
sample_id="VW7YKqPnjY4h39yTbx2L",
)
"""
_response = self._raw_client.update(
voice_id,
sample_id,
remove_background_noise=remove_background_noise,
selected_speaker_ids=selected_speaker_ids,
trim_start_time=trim_start_time,
trim_end_time=trim_end_time,
request_options=request_options,
)
return _response.data |
Update a PVC voice sample: apply noise removal or select a speaker.
Parameters
----------
voice_id : str
Voice ID to be used. You can use https://api.elevenlabs.io/v1/voices to list all the available voices.
sample_id : str
Sample ID to be used
remove_background_noise : typing.Optional[bool]
If set, background noise will be removed from the voice samples using our audio isolation model. If the samples do not contain background noise, this can degrade quality.
selected_speaker_ids : typing.Optional[typing.Sequence[str]]
Speaker IDs to be used for PVC training. Send all the speaker IDs you want to use in a single request, because each request overrides the previous selection.
trim_start_time : typing.Optional[int]
The start time of the audio to be used for PVC training, in milliseconds.
trim_end_time : typing.Optional[int]
The end time of the audio to be used for PVC training, in milliseconds.
request_options : typing.Optional[RequestOptions]
Request-specific configuration.
Returns
-------
AddVoiceResponseModel
Successful Response
Examples
--------
from elevenlabs import ElevenLabs
client = ElevenLabs(
api_key="YOUR_API_KEY",
)
client.voices.pvc.samples.update(
voice_id="21m00Tcm4TlvDq8ikWAM",
sample_id="VW7YKqPnjY4h39yTbx2L",
)
| update | python | elevenlabs/elevenlabs-python | src/elevenlabs/voices/pvc/samples/client.py | https://github.com/elevenlabs/elevenlabs-python/blob/master/src/elevenlabs/voices/pvc/samples/client.py | MIT |
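Since trim_start_time and trim_end_time are expressed in milliseconds, a caller working in seconds needs a small conversion step. A convenience sketch (not part of the SDK):

```python
def trim_range_ms(start_s: float, end_s: float) -> tuple[int, int]:
    """Convert a (start, end) range in seconds to whole milliseconds."""
    if end_s <= start_s:
        raise ValueError("end must be after start")
    return int(round(start_s * 1000)), int(round(end_s * 1000))


trim_start, trim_end = trim_range_ms(1.5, 42.25)
```

The returned pair can then be passed as trim_start_time and trim_end_time.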