6 VCs explain how startups can capture and defend market share in the AI era
Read on for answers to all our questions from:
Rick Grinnell, founder and managing partner, Glasswing Ventures
There are several layers to the emerging LLM stack, including models, pre-training solutions and fine-tuning tools. Do you expect startups to build striated solutions for individual layers of the LLM stack, or pursue a more vertical approach?
In our proprietary view of the GenAI tech stack, we categorize the landscape into four distinct layers: foundation model providers, middle-tier companies, end-market or top-layer applications, and full stack or end-to-end vertical companies.
We think that most of the opportunity lies in the application layer, and within that layer, we believe that in the near future, the best applications will harness their in-house expertise to build specialized middle-layer tooling and blend it with the appropriate foundational models. These are “vertically integrated” or “full-stack” applications. For startups, this approach means a shorter time to market. Without negotiating or integrating with external entities, startups can innovate, iterate and deploy solutions at an accelerated pace. This speed and agility can often be the differentiating factor in capturing market share or meeting a critical market need before competitors.
On the other hand, we view the middle layer as a conduit connecting the foundational aspects of AI with the refined, specialized application layer. This part of the stack includes cutting-edge capabilities encompassing model fine-tuning, prompt engineering and agile model orchestration. It’s here that we anticipate the rise of entities akin to Databricks. Yet the competitive dynamics of this layer present a unique challenge. Primarily, foundation model providers expanding into middle-layer tools heighten commoditization risks. Additionally, established market leaders venturing into this space further intensify the competition. Consequently, despite a surge in startups within this domain, clear winners have yet to emerge.
Companies like Datadog are building products to support the expanding AI market, including releasing an LLM observability tool. Will efforts like what Datadog has built (and similar output from large/incumbent tech powers) curtail the market area where startups can build and compete?
LLM observability falls within the “middle layer” category, acting as a catalyst for specialized business applications to use foundational models. Incumbents like Datadog, New Relic and Splunk have all produced LLM observability tools and appear to be putting substantial R&D dollars behind them, which may curtail the market area in the short term.
However, as we have seen before with the advent of the internet and cloud computing, incumbents tend to innovate until innovation becomes stagnant. With AI becoming a household name that finds use cases in every vertical, startups have the chance to come in with innovative solutions that disrupt and reimagine the work of incumbents. It’s still too early to say with certainty who the winners will be, as every day reveals new gaps in existing AI frameworks. Therein lie major opportunities for startups.
How much room in the market do the largest tech companies’ services leave for smaller companies and startups building tooling for LLM deployment?
When considering the landscape of foundational layer model providers like Alphabet/Google’s Bard, Microsoft/OpenAI’s GPT-4 and Anthropic’s Claude, it’s evident that the larger players possess inherent advantages in data accessibility, talent pool and computational resources. We expect this layer to settle into an oligopolistic structure like the cloud provider market, albeit with a strong open source contingent that will drive considerable third-party adoption.
As we look at the generative AI tech stack, the largest market opportunity lies above the model itself. Companies that introduce AI-powered APIs and operational layers for specific industries will create brand-new use cases and transform workflows, and by embracing this technology they stand to unlock substantial value.
However, it’s essential to recognize that the market is still far from crystallized. LLMs are in their infancy, and adoption at large corporations and startups has yet to reach full maturity. We need robust tools and platforms to enable broader utilization among businesses and individuals. Startups have the opportunity here to act quickly, find novel solutions to emerging problems and define new categories.
Interestingly, even large tech companies recognize the gaps in their services and have begun investing heavily in startups alongside VCs. These companies apply AI to their internal processes and thus see the value startups bring to LLM deployment and integration. Consider the recent investments from Microsoft, Nvidia and Salesforce into companies like Inflection AI and Cohere.
What can be done to ensure industry-specific startups that tune generative AI models for a specific niche will prove defensible?
To prove defensible in the rising climate of AI integration, industry-specific startups must prioritize collecting proprietary data, integrating a sophisticated application layer and ensuring output accuracy.
We have established a framework to assess the defensibility of AI companies at the application layer. First, the application must address a real enterprise pain point prioritized by executives. Second, to provide tangible benefits and long-term differentiation, the application should be composed of cutting-edge models that fit the specific needs of the software. It’s not enough to simply plug into OpenAI; rather, applications should choose their models intentionally while balancing cost, compute, and performance.
Third, the application is only as sophisticated as the data that it is fed. Proprietary data is necessary for specific and relevant insights and to ensure others cannot replicate the final product. To this end, in-house middle-layer capabilities provide a competitive edge while harnessing the power of foundational models. Finally, because generative AI carries an inevitable margin of error, the niche market must tolerate imprecision, as subjective and ambiguous content like sales or marketing copy inherently does.
How much technical competence can startups presume that their future enterprise AI customers will have in-house, and how much does that presumed expertise guide startup product selection and go-to-market motion?
Within the enterprise sector, there’s a clear recognition of the value of AI. However, many lack the internal capabilities to develop AI solutions. This gap presents a significant opportunity for startups specializing in AI to engage with enterprise clients. As the business landscape matures, proficiency in leveraging AI is becoming a strategic imperative.
McKinsey reports that generative AI alone could add up to $4.4 trillion in value across industries through writing code, analyzing consumer trends, personalizing customer service, improving operating efficiencies, and more. Ninety-four percent of business leaders agree AI will be critical to all businesses’ success over the next five years, and total global spending on AI is expected to reach $154 billion by the end of this year, a 27% increase from 2022. The next three years are also expected to see a compound annual growth rate of 27%, putting annual AI spending in 2026 at over $300 billion. While cloud computing remains critical, AI budgets are now more than double those for cloud. Eighty-two percent of business leaders believe integrating AI solutions will increase employee performance and job satisfaction, and startups should expect their future customers to bring a high level of desire for, and experience with, AI solutions.
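Those spending projections are internally consistent: compounding the cited $154 billion figure at a 27% annual rate for three years lands just over the $300 billion mark. A quick sanity check, using only the figures quoted above:

```python
# Sanity-check the spending projection: $154B this year,
# compounding at a 27% CAGR for three years.
base_spend = 154.0  # global AI spending this year, in $B (cited above)
cagr = 0.27         # compound annual growth rate (cited above)

spend_2026 = base_spend * (1 + cagr) ** 3
print(f"Projected 2026 AI spending: ${spend_2026:.0f}B")  # ≈ $315B, i.e. "over $300 billion"
```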
Finally, we’ve seen growth slow in recent quarters for tech products with consumption- or usage-based pricing. Will that fact lead startups building modern AI tools to pursue more traditional SaaS pricing? (The OpenAI pricing schema based on tokens and usage led us to this question.)
The trajectory of usage-based pricing has organically aligned with the needs of large language models, given the significant variation in prompt/output sizes and resource utilization per user. OpenAI itself racks up upward of $700,000 per day in compute costs, so to achieve profitability, these operating costs need to be allocated effectively.
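To see why per-user costs vary so widely under token-based pricing, consider two usage patterns under hypothetical per-token rates (the rates and the token counts below are illustrative assumptions, not any provider's actual prices):

```python
# Illustrative only: hypothetical per-1K-token rates, not a real price list.
PROMPT_RATE = 0.003   # $ per 1K prompt tokens (assumed)
OUTPUT_RATE = 0.006   # $ per 1K output tokens (assumed)

def request_cost(prompt_tokens: int, output_tokens: int) -> float:
    """Cost of a single LLM call under usage-based pricing."""
    return (prompt_tokens / 1000) * PROMPT_RATE + (output_tokens / 1000) * OUTPUT_RATE

# Two users with very different monthly usage:
light_user = request_cost(500, 200) * 30       # 30 short calls a month
heavy_user = request_cost(6000, 2000) * 2000   # 2,000 long calls a month
print(f"light: ${light_user:.2f}/mo, heavy: ${heavy_user:.2f}/mo")  # ≈ $0.08 vs ≈ $60.00
```

Under a flat SaaS price, the light user subsidizes the heavy one; usage-based pricing passes that variance through to each customer, which is exactly the cost-allocation pressure described above.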
Nevertheless, we’ve seen that tying all costs to volume is generally unpopular with end users, who prefer predictable pricing that lets them budget more effectively. Furthermore, it’s important to note that many applications of AI don’t rely on LLMs as a backbone and can offer conventional periodic SaaS pricing. Without direct token calls to the model provider, companies building infrastructural or value-added layers for AI are likely to gravitate toward such pricing strategies.
|
12052
|
6 VCs explain how startups can capture and defend marketshare in the AI era
Will that fact lead startups building modern AI tools to pursue more traditional SaaS pricing? (The OpenAI pricing schema based on tokens and usage led us to this question.)
The trajectory of usage-based pricing has organically aligned with the needs of large language models, given that there is significant variation in prompt/output sizes and resource utilization per user. OpenAI itself racks upward of $700,000 per day on compute, so to achieve profitability, these operation costs need to be allocated effectively.
Nevertheless, we’ve seen the sentiment that tying all costs to volume is generally unpopular with end users, who prefer predictable systems that allow them to budget more effectively. Furthermore, it’s important to note that many applications of AI don’t rely on LLMs as a backbone and can provide conventional periodic SaaS pricing. Without direct token calls to the model provider, companies engaged in establishing infrastructural or value-added layers for AI are likely to gravitate toward such pricing strategies.
The technology is still nascent, and many companies will likely find success with both kinds of pricing models.
|
12053
|
6 VCs explain how startups can capture and defend marketshare in the AI era
(The OpenAI pricing schema based on tokens and usage led us to this question.)
The trajectory of usage-based pricing has organically aligned with the needs of large language models, given that there is significant variation in prompt/output sizes and resource utilization per user. OpenAI itself racks upward of $700,000 per day on compute, so to achieve profitability, these operation costs need to be allocated effectively.
Nevertheless, we’ve seen the sentiment that tying all costs to volume is generally unpopular with end users, who prefer predictable systems that allow them to budget more effectively. Furthermore, it’s important to note that many applications of AI don’t rely on LLMs as a backbone and can provide conventional periodic SaaS pricing. Without direct token calls to the model provider, companies engaged in establishing infrastructural or value-added layers for AI are likely to gravitate toward such pricing strategies.
The technology is still nascent, and many companies will likely find success with both kinds of pricing models. Another possibility as LLM adoption becomes widespread is the adoption of hybrid structures, with tiered periodic payments and usage limits for SMBs and uncapped usage-based tiers tailored to larger enterprises.
|
12054
|
6 VCs explain how startups can capture and defend marketshare in the AI era
OpenAI itself racks upward of $700,000 per day on compute, so to achieve profitability, these operation costs need to be allocated effectively.
Nevertheless, we’ve seen the sentiment that tying all costs to volume is generally unpopular with end users, who prefer predictable systems that allow them to budget more effectively. Furthermore, it’s important to note that many applications of AI don’t rely on LLMs as a backbone and can provide conventional periodic SaaS pricing. Without direct token calls to the model provider, companies engaged in establishing infrastructural or value-added layers for AI are likely to gravitate toward such pricing strategies.
The technology is still nascent, and many companies will likely find success with both kinds of pricing models. Another possibility as LLM adoption becomes widespread is the adoption of hybrid structures, with tiered periodic payments and usage limits for SMBs and uncapped usage-based tiers tailored to larger enterprises. However, as long as large language technology remains heavily dependent on the inflow of data, usage-based pricing will unlikely go away completely.
|
12055
|
6 VCs explain how startups can capture and defend marketshare in the AI era
Nevertheless, we’ve seen the sentiment that tying all costs to volume is generally unpopular with end users, who prefer predictable systems that allow them to budget more effectively. Furthermore, it’s important to note that many applications of AI don’t rely on LLMs as a backbone and can provide conventional periodic SaaS pricing. Without direct token calls to the model provider, companies engaged in establishing infrastructural or value-added layers for AI are likely to gravitate toward such pricing strategies.
The technology is still nascent, and many companies will likely find success with both kinds of pricing models. Another possibility as LLM adoption becomes widespread is the adoption of hybrid structures, with tiered periodic payments and usage limits for SMBs and uncapped usage-based tiers tailored to larger enterprises. However, as long as large language technology remains heavily dependent on the inflow of data, usage-based pricing will unlikely go away completely. The interdependence between data flow and cost structure will maintain the relevance of usage-based pricing in the foreseeable future.
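The hybrid structure described above can be sketched in a few lines. The tier names, quotas, and rates below are invented for illustration only, not drawn from any real price list:

```python
# Hypothetical hybrid pricing: SMB tiers pay a flat periodic fee that
# covers a usage quota, with per-unit overage beyond it; the enterprise
# tier has no quota, so its bill is effectively pure usage-based.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    base_fee: float        # periodic (e.g. monthly) payment
    included_units: float  # usage covered by the base fee
    overage_rate: float    # price per unit beyond the quota

TIERS = {
    "smb_basic":  Tier("smb_basic",  base_fee=49.0,  included_units=100_000, overage_rate=0.002),
    "smb_pro":    Tier("smb_pro",    base_fee=199.0, included_units=500_000, overage_rate=0.0015),
    "enterprise": Tier("enterprise", base_fee=0.0,   included_units=0,       overage_rate=0.001),
}

def monthly_bill(tier_name: str, units_used: float) -> float:
    """Flat fee plus overage charges for one billing period."""
    tier = TIERS[tier_name]
    overage = max(0.0, units_used - tier.included_units)
    return tier.base_fee + overage * tier.overage_rate
```

An SMB staying inside its quota sees a fully predictable bill, while heavy users land on the uncapped usage-based tier, which is the budgeting trade-off the interviewees describe.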
Lisa Calhoun, founding managing partner, Valor VC
There are several layers to the emerging LLM stack, including models, pre-training solutions, and fine-tuning tools. Do you expect startups to build striated solutions for individual layers of the LLM stack, or pursue a more vertical approach?
While there are startups specializing in parts of the stack (like Pinecone), Valor’s focus is on applied AI, which we define as AI that is solving a customer problem. Saile.ai is a good example — it uses AI to generate closeable leads for the Fortune 500. Or Funding U using its own trained dataset to create a more useful credit risk score. Or Allelica, using AI on treatment solutions applied to individual DNA to find the best medical treatment for you personally in a given situation.
Companies like Datadog are building products to support the expanding AI market, including releasing an LLM observability tool. Will efforts like what Datadog has built (and similar output from large/incumbent tech powers) curtail the market area where startups can build and compete?
Tools like Datadog can only help the acceptance of AI tools if they succeed in monitoring AI performance bottlenecks. That in and of itself is probably still largely unexplored territory that will see a lot of change and maturing in the next few years. One key aspect there might be cost monitoring as well since companies like OpenAI charge largely “by the token,” which is a very different metric than most cloud computing.
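Token-denominated cost monitoring of the kind described here boils down to multiplying per-request token counts by per-token rates. A minimal sketch, using made-up rates rather than any provider’s actual pricing:

```python
# Hypothetical per-1K-token rates. Real providers typically price prompt
# ("input") and completion ("output") tokens separately, which is what
# makes this metric so different from per-instance-hour cloud billing.
RATES_PER_1K = {"input": 0.0015, "output": 0.002}

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost of a single model call, in dollars."""
    return ((input_tokens / 1000) * RATES_PER_1K["input"]
            + (output_tokens / 1000) * RATES_PER_1K["output"])

def daily_cost(requests: list[tuple[int, int]]) -> float:
    """Aggregate estimated spend over a day's (input, output) token counts."""
    return sum(request_cost(i, o) for i, o in requests)
```

A monitoring tool would log these per-request estimates alongside latency and error rates, surfacing cost spikes the way it would any other performance bottleneck.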
What can be done to ensure industry-specific startups that tune generative AI models for a specific niche will prove defensible?
Is Sam Bankman-Fried’s defense even trying to win?
I have never seen Sam Bankman-Fried so still as he was during the prosecution’s opening statement. The characteristic leg-jiggling was absent. He barely moved as the prosecutor listed the evidence against him: internal company files, what customers were told, the testimony of his co-conspirators and his own words.
His hair was shorn, the result of a haircut from a fellow prisoner, the Wall Street Journal reported. He wore a suit bought at a discount at Macy’s, per the Journal; it hung on him. He appeared to have lost some weight.
“All of that was built on lies.”
Bankman-Fried, at this time last year, had a luxury lifestyle as the CEO of crypto exchange FTX, said the assistant US attorney, Thane Rehn, in the cadence of a high schooler delivering his lines in a student play. Bankman-Fried hung out with Tom Brady. He was on magazine covers, lived in a $30 million penthouse, and spent time with world politicians. “All of that was built on lies,” Rehn said.
In his opening statement, Rehn dodged explaining cryptocurrency to the jury. Instead, he punched hard on Bankman-Fried lying and stealing.
Bankman-Fried sat almost motionless, occasionally glancing at Rehn, as the prosecutor told the jury that Bankman-Fried sold stock in FTX and borrowed millions from lenders by lying.
The story Rehn told is familiar to anyone following the news. In May and June of 2022, Alameda Research — the crypto trading company ostensibly helmed by Caroline Ellison — didn’t have enough to pay its bills, so it pulled customer money to repay loans. By September, the hole in the FTX balance sheet was so big that customers could never be repaid.
FTX “didn’t have a chief risk officer, which became an issue when the storm hit.”
When CoinDesk published its article in November 2022, people realized FTX was a house of cards, Rehn said. Meanwhile, Bankman-Fried tweeted: “FTX is fine. Assets are fine” and “We don’t invest customer assets even in treasuries.”
Pointing at Bankman-Fried, Rehn said, “This man stole billions of dollars from thousands of people.”
So how was the defense going to follow it up?
I was very curious, having learned yesterday that Bankman-Fried had never been offered a plea deal, since he and his attorneys had told the government they wouldn’t negotiate. Surely there would be some manner of evidence, some something, that would have made him so confident.
There was, instead, a metaphor.
Defense attorney Mark Cohen, with the energy of a patient father telling his obnoxious children a bedtime story, assured us that working at a startup was like building a plane while flying it, and that FTX the plane had flown right into the perfect storm: the crypto crash. Except, uh, he also said this: FTX “didn’t have a chief risk officer, which became an issue when the storm hit.”
I couldn’t stop thinking about the missing risk officer
The problem with this metaphor is that if FTX was a plane, it was a plane flying with a key component missing — namely, the risk officer, an executive whose job it is to, well, manage risk.
I couldn’t stop thinking about the missing risk officer
The problem with this metaphor is that if FTX was a plane, it was a plane flying with a key component missing — namely, the risk officer, an executive whose job it is to, well, manage risk. This is sort of an important thing, as risks can be anything from reputational to regulatory to financial.
|
12084
|
Is Sam Bankman-Fried’s defense even trying to win?
Surely there would be some manner of evidence, some something, that would have made him so confident.
There was, instead, a metaphor.
Defense attorney Mark Cohen, with the energy of a patient father telling his obnoxious children a bedtime story, assured us that working at a startup was like building a plane while flying it, and that FTX the plane had flown right into the perfect storm: the crypto crash. Except, uh, he also said this: FTX “didn’t have a chief risk officer, which became an issue when the storm hit.”
I couldn’t stop thinking about the missing risk officer
The problem with this metaphor is that if FTX was a plane, it was a plane flying with a key component missing — namely, the risk officer, an executive whose job it is to, well, manage risk. This is sort of an important thing, as risks can be anything from reputational to regulatory to financial.
FTX was named such as it was because it was a futures exchange, which, to borrow a phrase from Bloomberg’s Matt Levine, “sits between the winners and losers of bets.” That means FTX can’t pay out what it owes the winners unless the losers pay up.
|
12085
|
Is Sam Bankman-Fried’s defense even trying to win?
Except, uh, he also said this: FTX “didn’t have a chief risk officer, which became an issue when the storm hit.”
I couldn’t stop thinking about the missing risk officer
The problem with this metaphor is that if FTX was a plane, it was a plane flying with a key component missing — namely, the risk officer, an executive whose job it is to, well, manage risk. This is sort of an important thing, as risks can be anything from reputational to regulatory to financial.
FTX was named such as it was because it was a futures exchange, which, to borrow a phrase from Bloomberg’s Matt Levine, “sits between the winners and losers of bets.” That means FTX can’t pay out what it owes the winners unless the losers pay up. Risk management is a crucial part of the business; risk officers exist to identify business’ potential risks, monitor, and mitigate them. This is to say nothing of the regulatory risks around crypto.
|
12086
|
Is Sam Bankman-Fried’s defense even trying to win?
This is sort of an important thing, as risks can be anything from reputational to regulatory to financial.
FTX was named such as it was because it was a futures exchange, which, to borrow a phrase from Bloomberg’s Matt Levine, “sits between the winners and losers of bets.” That means FTX can’t pay out what it owes the winners unless the losers pay up. Risk management is a crucial part of the business; risk officers exist to identify business’ potential risks, monitor, and mitigate them. This is to say nothing of the regulatory risks around crypto.
As Cohen droned on about airplanes, I couldn’t stop thinking about the missing risk officer. Bringing it up, I thought, was a tremendous mistake. The prosecution hadn’t mentioned it. Either Bankman-Fried is stupid — unlikely — or he deliberately didn’t hire a risk officer. Was he worried about what one might find?
Sure, as Cohen put it, Bankman-Fried was a math nerd who didn’t party.
|
12087
|
Is Sam Bankman-Fried’s defense even trying to win?
FTX was named such as it was because it was a futures exchange, which, to borrow a phrase from Bloomberg’s Matt Levine, “sits between the winners and losers of bets.” That means FTX can’t pay out what it owes the winners unless the losers pay up. Risk management is a crucial part of the business; risk officers exist to identify business’ potential risks, monitor, and mitigate them. This is to say nothing of the regulatory risks around crypto.
As Cohen droned on about airplanes, I couldn’t stop thinking about the missing risk officer. Bringing it up, I thought, was a tremendous mistake. The prosecution hadn’t mentioned it. Either Bankman-Fried is stupid — unlikely — or he deliberately didn’t hire a risk officer. Was he worried about what one might find?
Sure, as Cohen put it, Bankman-Fried was a math nerd who didn’t party. That paints a picture of someone who’s pretty deliberate, particularly since he immediately left MIT and went to work on Wall Street.
|
12088
|
Is Sam Bankman-Fried’s defense even trying to win?
Risk management is a crucial part of the business; risk officers exist to identify business’ potential risks, monitor, and mitigate them. This is to say nothing of the regulatory risks around crypto.
As Cohen droned on about airplanes, I couldn’t stop thinking about the missing risk officer. Bringing it up, I thought, was a tremendous mistake. The prosecution hadn’t mentioned it. Either Bankman-Fried is stupid — unlikely — or he deliberately didn’t hire a risk officer. Was he worried about what one might find?
Sure, as Cohen put it, Bankman-Fried was a math nerd who didn’t party. That paints a picture of someone who’s pretty deliberate, particularly since he immediately left MIT and went to work on Wall Street. If he had been a party-hardy trainwreck, I could see overlooking a risk officer in order to do another line, or a supermodel, or something else important. Why was the defense bringing this up?
|
12089
|
Is Sam Bankman-Fried’s defense even trying to win?
This is to say nothing of the regulatory risks around crypto.
As Cohen droned on about airplanes, I couldn’t stop thinking about the missing risk officer. Bringing it up, I thought, was a tremendous mistake. The prosecution hadn’t mentioned it. Either Bankman-Fried is stupid — unlikely — or he deliberately didn’t hire a risk officer. Was he worried about what one might find?
Sure, as Cohen put it, Bankman-Fried was a math nerd who didn’t party. That paints a picture of someone who’s pretty deliberate, particularly since he immediately left MIT and went to work on Wall Street. If he had been a party-hardy trainwreck, I could see overlooking a risk officer in order to do another line, or a supermodel, or something else important. Why was the defense bringing this up?
But as Cohen tried to tell me that FTX’s and Alameda’s business relationships were “reasonable under the circumstances,” the lack of risk officer kept elbowing me in the ribs.
|
12090
|
Is Sam Bankman-Fried’s defense even trying to win?
Bringing it up, I thought, was a tremendous mistake. The prosecution hadn’t mentioned it. Either Bankman-Fried is stupid — unlikely — or he deliberately didn’t hire a risk officer. Was he worried about what one might find?
Sure, as Cohen put it, Bankman-Fried was a math nerd who didn’t party. That paints a picture of someone who’s pretty deliberate, particularly since he immediately left MIT and went to work on Wall Street. If he had been a party-hardy trainwreck, I could see overlooking a risk officer in order to do another line, or a supermodel, or something else important. Why was the defense bringing this up?
But as Cohen tried to tell me that FTX’s and Alameda’s business relationships were “reasonable under the circumstances,” the lack of risk officer kept elbowing me in the ribs. “Sam acted in good faith and took reasonable business measures” is a pretty hard pill to swallow with that in mind.
|
12091
|
Is Sam Bankman-Fried’s defense even trying to win?
The prosecution hadn’t mentioned it. Either Bankman-Fried is stupid — unlikely — or he deliberately didn’t hire a risk officer. Was he worried about what one might find?
Sure, as Cohen put it, Bankman-Fried was a math nerd who didn’t party. That paints a picture of someone who’s pretty deliberate, particularly since he immediately left MIT and went to work on Wall Street. If he had been a party-hardy trainwreck, I could see overlooking a risk officer in order to do another line, or a supermodel, or something else important. Why was the defense bringing this up?
But as Cohen tried to tell me that FTX’s and Alameda’s business relationships were “reasonable under the circumstances,” the lack of risk officer kept elbowing me in the ribs. “Sam acted in good faith and took reasonable business measures” is a pretty hard pill to swallow with that in mind.
Man, it’s no good when your defense lawyer has just made you sound worse than the prosecution already did.
|
12092
|
Is Sam Bankman-Fried’s defense even trying to win?
Was he worried about what one might find?
Sure, as Cohen put it, Bankman-Fried was a math nerd who didn’t party. That paints a picture of someone who’s pretty deliberate, particularly since he immediately left MIT and went to work on Wall Street. If he had been a party-hardy trainwreck, I could see overlooking a risk officer in order to do another line, or a supermodel, or something else important. Why was the defense bringing this up?
But as Cohen tried to tell me that FTX’s and Alameda’s business relationships were “reasonable under the circumstances,” the lack of risk officer kept elbowing me in the ribs. “Sam acted in good faith and took reasonable business measures” is a pretty hard pill to swallow with that in mind.
Man, it’s no good when your defense lawyer has just made you sound worse than the prosecution already did. And while Cohen tried to make the common white-collar defense argument that Bankman-Fried, as CEO, was simply too busy to oversee what everyone did every day, he just made me more suspicious.
|
12093
|
Is Sam Bankman-Fried’s defense even trying to win?
That paints a picture of someone who’s pretty deliberate, particularly since he immediately left MIT and went to work on Wall Street. If he had been a party-hardy trainwreck, I could see overlooking a risk officer in order to do another line, or a supermodel, or something else important. Why was the defense bringing this up?
But as Cohen tried to tell me that FTX’s and Alameda’s business relationships were “reasonable under the circumstances,” the lack of risk officer kept elbowing me in the ribs. “Sam acted in good faith and took reasonable business measures” is a pretty hard pill to swallow with that in mind.
Man, it’s no good when your defense lawyer has just made you sound worse than the prosecution already did. And while Cohen tried to make the common white-collar defense argument that Bankman-Fried, as CEO, was simply too busy to oversee what everyone did every day, he just made me more suspicious. That’s why you hire a risk officer and delegate!
|
12094
|
Is Sam Bankman-Fried’s defense even trying to win?
If he had been a party-hardy trainwreck, I could see overlooking a risk officer in order to do another line, or a supermodel, or something else important. Why was the defense bringing this up?
But as Cohen tried to tell me that FTX’s and Alameda’s business relationships were “reasonable under the circumstances,” the lack of risk officer kept elbowing me in the ribs. “Sam acted in good faith and took reasonable business measures” is a pretty hard pill to swallow with that in mind.
Man, it’s no good when your defense lawyer has just made you sound worse than the prosecution already did. And while Cohen tried to make the common white-collar defense argument that Bankman-Fried, as CEO, was simply too busy to oversee what everyone did every day, he just made me more suspicious. That’s why you hire a risk officer and delegate! That’s the whole point!
|
12095
|
Is Sam Bankman-Fried’s defense even trying to win?
If he had been a party-hardy trainwreck, I could see overlooking a risk officer in order to do another line, or a supermodel, or something else important. Why was the defense bringing this up?
But as Cohen tried to tell me that FTX’s and Alameda’s business relationships were “reasonable under the circumstances,” the lack of risk officer kept elbowing me in the ribs. “Sam acted in good faith and took reasonable business measures” is a pretty hard pill to swallow with that in mind.
Man, it’s no good when your defense lawyer has just made you sound worse than the prosecution already did. And while Cohen tried to make the common white-collar defense argument that Bankman-Fried, as CEO, was simply too busy to oversee what everyone did every day, he just made me more suspicious. That’s why you hire a risk officer and delegate! That’s the whole point! I could barely even hear Cohen blaming Caroline Ellison and Changpeng “CZ” Zhao for the debacle over the “no risk officer” ringing in my ears.
|
12096
|
Is Sam Bankman-Fried’s defense even trying to win?
Why was the defense bringing this up?
But as Cohen tried to tell me that FTX’s and Alameda’s business relationships were “reasonable under the circumstances,” the lack of risk officer kept elbowing me in the ribs. “Sam acted in good faith and took reasonable business measures” is a pretty hard pill to swallow with that in mind.
Man, it’s no good when your defense lawyer has just made you sound worse than the prosecution already did. And while Cohen tried to make the common white-collar defense argument that Bankman-Fried, as CEO, was simply too busy to oversee what everyone did every day, he just made me more suspicious. That’s why you hire a risk officer and delegate! That’s the whole point! I could barely even hear Cohen blaming Caroline Ellison and Changpeng “CZ” Zhao for the debacle over the “no risk officer” ringing in my ears.
Following the defense’s opening statements, things got still worse for Bankman-Fried.
|
12097
|
Is Sam Bankman-Fried’s defense even trying to win?
But as Cohen tried to tell me that FTX’s and Alameda’s business relationships were “reasonable under the circumstances,” the lack of risk officer kept elbowing me in the ribs. “Sam acted in good faith and took reasonable business measures” is a pretty hard pill to swallow with that in mind.
Man, it’s no good when your defense lawyer has just made you sound worse than the prosecution already did. And while Cohen tried to make the common white-collar defense argument that Bankman-Fried, as CEO, was simply too busy to oversee what everyone did every day, he just made me more suspicious. That’s why you hire a risk officer and delegate! That’s the whole point! I could barely even hear Cohen blaming Caroline Ellison and Changpeng “CZ” Zhao for the debacle over the “no risk officer” ringing in my ears.
Following the defense’s opening statements, things got still worse for Bankman-Fried. The prosecution called its first witness, Marc-Antoine Julliard, whose money got stuck on FTX.
|
12098
|
Is Sam Bankman-Fried’s defense even trying to win?
“Sam acted in good faith and took reasonable business measures” is a pretty hard pill to swallow with that in mind.
Man, it’s no good when your defense lawyer has just made you sound worse than the prosecution already did. And while Cohen tried to make the common white-collar defense argument that Bankman-Fried, as CEO, was simply too busy to oversee what everyone did every day, he just made me more suspicious. That’s why you hire a risk officer and delegate! That’s the whole point! I could barely even hear Cohen blaming Caroline Ellison and Changpeng “CZ” Zhao for the debacle over the “no risk officer” ringing in my ears.
Following the defense’s opening statements, things got still worse for Bankman-Fried. The prosecution called its first witness, Marc-Antoine Julliard, whose money got stuck on FTX. Juilliard, who was born in Paris and lives in London, testified that he trusted FTX because Bankman-Fried came across as a leading figure of the industry.
|
12099
|
Is Sam Bankman-Fried’s defense even trying to win?
Man, it’s no good when your defense lawyer has just made you sound worse than the prosecution already did. And while Cohen tried to make the common white-collar defense argument that Bankman-Fried, as CEO, was simply too busy to oversee what everyone did every day, he just made me more suspicious. That’s why you hire a risk officer and delegate! That’s the whole point! I could barely even hear Cohen blaming Caroline Ellison and Changpeng “CZ” Zhao for the debacle over the “no risk officer” ringing in my ears.
Following the defense’s opening statements, things got still worse for Bankman-Fried. The prosecution called its first witness, Marc-Antoine Julliard, whose money got stuck on FTX. Juilliard, who was born in Paris and lives in London, testified that he trusted FTX because Bankman-Fried came across as a leading figure of the industry. When he was evaluating the exchange, he thought the sheer volume of users was important, too — at the time, FTX was among the top three biggest exchanges.