# Matthew Fite
People
October 17, 2022
# Webinar | Hybrid AI: successfully combining expert knowledge with ML models
Contributors
Matthias Feys
Q / CTO
Machine learning opened up new ways of solving technical challenges by training models on data instead of directly implementing rules and logic, which creates many new opportunities for tackling difficult problems.
Sometimes, however, it can also be useful to combine these machine learning models with (expert) rules, to get the best possible outcome and leverage the benefits of both expert knowledge and machine learning models.
This field is called Hybrid AI, and it focuses on combining non-symbolic AI (e.g. machine learning) with symbolic AI (e.g. expert rules). Our speakers, Prof. Sofie Van Hoecke (PreDiCT) and Matthias Feys (ML6), will give you an overview of this field by tackling the following topics:
* Why and when Hybrid AI is relevant for your situation.
* An overview of different ways to combine rules with machine learning models.
* Concrete examples where hybrid AI was implemented.
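To make the combination of symbolic rules and learned models concrete ahead of the webinar, here is a minimal sketch of one common pattern: a hard expert rule takes precedence, and a trained classifier handles the remaining cases. The features, thresholds and training data are purely illustrative and not taken from the talk.

```python
# Minimal hybrid AI sketch: an expert rule overrides a learned classifier.
# Features, thresholds and training data below are purely illustrative.
from sklearn.ensemble import RandomForestClassifier

def hybrid_predict(model, temperature, vibration):
    # Symbolic part: a hard expert rule that always applies.
    if temperature > 90.0:          # domain knowledge: certain failure
        return 1
    # Non-symbolic part: the trained model scores the remaining cases.
    return int(model.predict([[temperature, vibration]])[0])

# Train the non-symbolic component on (toy) historical data.
X = [[70, 0.1], [75, 0.3], [80, 0.8], [85, 0.9]]
y = [0, 0, 1, 1]
model = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

print(hybrid_predict(model, temperature=72, vibration=0.2))  # model decides
print(hybrid_predict(model, temperature=95, vibration=0.1))  # rule decides: 1
```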
#### Get access to the webinar by filling in the form below.
# Structured Data
Keep your knowledge in safe hands
## Driven by data
Many business processes are recorded in tabular datasets like Excel sheets,
relational databases, and time series. Thanks to our Machine Learning
expertise, we turn your structured data into valuable insights that help you
solve a variety of problems.
### Regression & forecasting | scraping/output/-4005684865848025300.txt | [
Predicting the future is hard, but with the right tools, we can forecast
trends in e.g. energy consumption or sales volume with precision. We do this
by using external data sources and taking advantage of the latest model
improvements.
### Classification & clustering
Labeling data records adds value to large data sets and automates actions. This involves discovering groups of similar records (clustering) and assigning labels to new data (classification). These techniques can be used for detecting machine failures, predicting sales abandonment, and clustering e-commerce users.
### Anomaly detection
We are experts in detecting anomalies in machine and process behavior. Our
strategy is to separate "abnormal" events from "normal" behavior to gain
insights into causality and explainability and use them to optimize your
systems.
### Operational research & optimization | scraping/output/-4005684865848025300.txt | [
Even if a process works well, it can always be improved. This is where our
expertise comes in. We specialize in tackling complex problems such as
production planning, job scheduling, vehicle routing, box packing and more.
## Client cases
Discover how our expertise in Structured Data leads businesses to success.
The AI-Driven NGO: A data-driven approach to creating a better future for children
By gaining insights into the donor journey, the efficiency and effectiveness of fundraising could be enhanced.
November 19, 2021
Public & Professional Services
Building a recommendation engine for March Real Estate
ML6 built a proactive recommendation tool, resulting in a 7x boost in the number of leads from the matching engine.
May 28, 2021
Public & Professional Services
## Typical challenges
With our expertise, we can help you overcome structured data challenges in AI.
## Aligning the technical problem formulation with the business problem
Before starting machine learning (ML) model training, you need to understand
the business requirements and available data. This includes deciding whether
the problem should be approached as a regression or classification task, or
whether ranking or recommendation is required. The success of the technical
implementation is ultimately determined by meeting business expectations,
which we always strive to achieve.
## Validation is hard | scraping/output/-4005684865848025300.txt | [
Unsupervised learning uses tools like clustering to identify patterns in data, but the results can be difficult to interpret. That's why domain experts need to verify during development that the outcome is accurate. It is also tough to identify causal relationships between variables, and labels may not be available in situations like fraud detection or predicting machine failures. Once you overcome these challenges with our help, however, unsupervised learning is a powerful tool for uncovering hidden patterns and gaining insights from data.
## Data engineering and change management | scraping/output/-4005684865848025300.txt | [
Building a successful solution for structured data requires a lot of data
engineering and change management effort. Moreover, machine learning system
development can lead to hidden technical problems such as poor data quality,
model complexity and deployment challenges. To create long-term value, it’s
not enough to simply train an ML model. Validation, integration into existing
systems, ongoing monitoring and updating are essential to deliver real value
over time. We help you do just that.
## High-level outline of the solution
### Data collection
The first step in any structured data solution is data collection. This can be done from a variety of sources, including internal databases, APIs, and third-party data providers.
### Data cleaning and preprocessing
Once data has been collected, it must be cleaned and preprocessed to make sure that it is of high quality. This includes tasks like removing missing values, handling outliers, and converting data types. Exploratory Data Analysis (EDA) is essential here.
### Feature engineering
Feature engineering involves selecting and transforming the features in the data that are most relevant to the problem at hand. This can include creating new features, selecting important features, and scaling or normalizing features.
### Model training and selection
A machine learning model can be trained after the data has been preprocessed and features have been engineered. The selection of a specific model is based on how well it performs on a holdout dataset.
### Deployment and monitoring
Once the model has been trained and evaluated, it must be deployed into production. This involves integrating the model into existing systems and workflows and monitoring its performance over time.
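To make the outline concrete, here is a compact sketch that walks a hypothetical tabular dataset through the steps above; the file name, column names and choice of model are illustrative assumptions, not a prescribed stack.

```python
# Sketch of the outline above on a hypothetical tabular dataset:
# collect -> clean -> engineer features -> train/select -> (deploy elsewhere).
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Data collection: here a CSV export; could be a database or API instead.
df = pd.read_csv("sales.csv")                 # hypothetical file

# Cleaning and preprocessing: drop missing targets, clip extreme outliers.
df = df.dropna(subset=["units_sold"])
df["units_sold"] = df["units_sold"].clip(upper=df["units_sold"].quantile(0.99))

# Feature engineering: derive calendar features from a date column.
df["date"] = pd.to_datetime(df["date"])
df["month"] = df["date"].dt.month
df["weekday"] = df["date"].dt.weekday

X = df[["month", "weekday", "price"]]         # hypothetical features
y = df["units_sold"]

# Model training and selection on a holdout set.
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("holdout MAE:", mean_absolute_error(y_hold, model.predict(X_hold)))
```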
## Connect with our AI experts in Structured Data
Contact us to turn your structured data into valuable insights that help you solve a variety of problems.
Life Sciences & Healthcare
April 28, 2021
# Pharma 4.0: Impact drug manufacturing with AI in Life Science Industries
In this blog, we will uncover some pressing challenges within the Life Science
manufacturing industry and how breakthroughs in Machine Learning techniques
offer measurable returns.
### From batch to continuous manufacturing and Pharma 4.0
Pharmaceutical companies are often in a race against time. Although patents protect companies' intellectual property, most of the patent term is spent turning an idea into a marketable product. Traditionally, medicines are produced the old-fashioned way, in a batch process [3]. This traditional batch process has proven to have high lead times, because production is stopped after each process step to test for quality assurance. Sometimes materials are stored in containers or shipped to other facilities before they continue in the process [5]. Each stop increases the lead time and can cause defects and scrap [6]. As an indication, lead times can be up to 365 days, of which 228 are dedicated to drug substance production, 75 to drug product formulation and 41 to packaging. Inventories, including raw-material storage, can last 250 days [1]. Reducing these times is essential to recover the billions spent in drug development, given that only a few years remain before the patents expire.
The pharmaceutical industry is often compared to the semiconductor industry due to the high costs and the need for high throughput, volume and yield in a clean environment with high consistency [2]. The semiconductor industry has already matured considerably when it comes to implementing Industry 4.0, which has resulted in major technological advances (i.e. smaller chips with greater capabilities in computers, phones, etc.). But what is the state of Pharma 4.0? How is this industry moving towards the future? Compared to the semiconductor industry, there is a difference in the demanding regulations enforced by authorities like the US FDA and the EU Commission to ensure quality. Changes to production in the form of digitization can result in changes to machines, processes and even the product itself. These strict regulations are a possible cause of the industry's conservative character compared to the semiconductor industry.
Today, next to the stringent manufacturing requirements, the industry is entering an era of smaller batches and personalized medicine. Medicine is designed with more unique features and needs to be delivered more quickly to patients in need [6]. In other words, drug production requires very small batches, often measured in sub-liters, tuned to individual genomes [1].
To allow smaller batches and more cost-effective production of drugs, the industry is changing to continuous flow manufacturing, as shown in Figure 1. Small amounts of chemical ingredients flow without disruption from raw ingredients to tablet. In 2016 the FDA encouraged manufacturers to transition from batch to continuous manufacturing due to its many benefits [6]. This new method has the potential to cut drug manufacturing times by 90% and costs by 30-50% according to Novartis [9], or to gain an additional $50 billion in annual revenue [1]. However, it requires seamless process integration and full process control, which can be achieved with operational data and automation. Leading pharma giants have already been working on continuous manufacturing for multiple years: Johnson & Johnson's Janssen won the FDA's approval to switch from batch to continuous manufacturing [10], and Novartis entered a 10-year research collaboration program with the Massachusetts Institute of Technology (MIT) in 2007 [11].
Figure 1: Conceptual continuous manufacturing process compared to a typical batch process for the pharmaceutical industry, by Lee et al. [8] (left), and a Novartis vision of continuous manufacturing in cooperation with MIT [11] (right)
### So how can AI help with drug manufacturing?
No matter whether you have a batch process or a continuous manufacturing process, AI projects require data and a process that is repetitive, has a measurable outcome and carries a certain uncertainty in order to be successful, as discussed in ML6's blog on how to boost your manufacturing process with AI [12]. Data can come in several forms, from microscopic lab data to visual inspection data to IoT data gathered from the machines and visualized in real time in dashboards. Tips and tricks on the latter can be found in blogs [13] and [14].
As discussed above, continuous manufacturing processes require tight process and quality control. One of the advantages AI can offer is connecting sensor data to lab results, as depicted in Figure 2. Thanks to open standards such as OPC UA on PLCs, machines and lines from different brands can be integrated, giving easy access to your machine data. The data can be stored in a suitable database such as InfluxDB or Prometheus. Instead of manually testing a few samples every X hours, predictive algorithms can predict lab results from real-time data for every single batch or drug.
Figure 2: Use machine learning to predict the quality of your drug for every
single batch.
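As a minimal sketch of that data path, the snippet below reads one PLC value with the python-opcua client and writes it to InfluxDB; the endpoint, node id, token, organisation and bucket are placeholders for your own setup.

```python
# Sketch: read a PLC sensor value over OPC UA and store it in InfluxDB.
# Endpoint, node id, token, org and bucket are placeholders for your setup.
from opcua import Client as OpcClient                      # pip install opcua
from influxdb_client import InfluxDBClient, Point          # pip install influxdb-client
from influxdb_client.client.write_api import SYNCHRONOUS

plc = OpcClient("opc.tcp://plc.local:4840")                # hypothetical endpoint
plc.connect()
try:
    temperature = plc.get_node("ns=2;s=Line1.Temperature").get_value()
finally:
    plc.disconnect()

with InfluxDBClient(url="http://localhost:8086", token="TOKEN", org="ml6") as db:
    point = Point("process").tag("line", "1").field("temperature", float(temperature))
    db.write_api(write_options=SYNCHRONOUS).write(bucket="pharma", record=point)
```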
| scraping/output/-4422453358527678687.txt | [
Next to quality control using IoT data, one can use machine learning algorithms to automate the visual inspection of medicine foil strips, checking the container and closure, the information written on the label (such as brand name, ingredient names, manufacturer name and logo, batch/lot number, expiry date, etc.), and the physical characteristics of the tablets/capsules (such as uniformity of shape, size, colour, texture, breaks, cracks, splits, markings, empty capsules, etc.). Even on a microscopic scale you can, for example, classify microorganisms in pharmaceutical microbiology or detect particle sizes. A common technique used in machine learning is image segmentation to distinguish common objects or microscopic particles, as shown in Figure 3.
Figure 3: Image segmentation techniques for common objects, shown on the left by Lin et al. [16], and applied to an image from a Scanning Electron Microscope (SEM)
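As a stand-in for the kind of segmentation model behind Figure 3 (not the exact pipeline used there), the sketch below runs a pretrained Mask R-CNN from torchvision on a single image; the input file is hypothetical, and real SEM imagery would typically require fine-tuning.

```python
# Sketch: off-the-shelf instance segmentation with a pretrained Mask R-CNN.
# The input file is a placeholder; SEM images would usually need fine-tuning.
import torch
from torchvision.io import read_image, ImageReadMode
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = maskrcnn_resnet50_fpn(pretrained=True).eval()
image = convert_image_dtype(read_image("sample.png", ImageReadMode.RGB), torch.float)
with torch.no_grad():
    prediction = model([image])[0]   # masks, boxes, labels and scores per object
keep = prediction["scores"] > 0.5    # discard low-confidence detections
print("objects found:", int(keep.sum()))
```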
The first step in improving efficiency is to create insight into your manufacturing machine data. An example is the Bosch Pharma i 4.0 Starter Edition [17], which offers condition monitoring, event tracking and measuring of Overall Equipment Effectiveness (OEE). Similar customized dashboards can be created using open-source technologies such as InfluxDB and Grafana [13]. Historical data provides the possibility to do anomaly detection or predictive maintenance. Note that analysis of track-and-trace data can also be performed for traceability purposes, to make sure you are fully compliant with regulations. Next, predictive algorithms can be developed to give operators or higher management foresight.
Countless parameters in a manufacturing process can be mapped and optimized. Doing this manually is a tedious job and practically impossible due to the number of possibilities and varying conditions. Using machine learning, you can implement a fully autonomous parameter optimizer: a self-learning system that finds the optimal solution in every circumstance to increase efficiency. Sample use cases are the improvement of a powertrain on an electric bike and reducing the energy consumption of a data center, and this can be applied to drug manufacturing as well [12], improving production, quality and safety while reducing the consumption of raw materials. A schematic outline of such a self-learning algorithm is shown in Figure 4.
Figure 4: Use machine learning to improve manufacturing efficiencies and optimize the production process
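One way to sketch such a self-learning optimizer is Bayesian optimization: the algorithm proposes parameter settings, observes the resulting KPI and updates its belief about the optimum. In the sketch below, the objective function and parameter ranges are placeholders for evaluations against the real process or a digital twin.

```python
# Sketch: closed-loop parameter optimization with Bayesian optimization.
# The objective below is a stand-in for a real (simulated or measured) KPI.
from skopt import gp_minimize          # pip install scikit-optimize
from skopt.space import Real

def process_cost(params):
    temperature, flow_rate = params
    # Placeholder for a real evaluation: run the process (or a digital twin)
    # at these settings and return scrap rate, energy use, cycle time, ...
    return (temperature - 72.0) ** 2 + (flow_rate - 1.5) ** 2

result = gp_minimize(
    process_cost,
    [Real(60.0, 90.0, name="temperature"), Real(0.5, 3.0, name="flow_rate")],
    n_calls=30,
    random_state=0,
)
print("best settings:", result.x, "cost:", result.fun)
```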
| scraping/output/-4422453358527678687.txt | [
Medicines often have different lead times and specific transport or storage requirements: drugs that require temperature control, resulting in limited shelf life; flammable or explosive drugs that need to be handled carefully; narcotics or psychotropic drugs that require close monitoring due to stringent regulations; etc. These requirements can cause complexities when it comes to storage and transportation, or to finding the optimal planning and demand forecast. Using machine learning techniques, one can optimize planning and forecasting both inside and outside manufacturing facilities. An example of a self-learning algorithm can be found in Figure 5, where the optimal route is found for a single driver in an environment with no obstacles (left) or for a multi-driver optimization problem with multiple obstacles (right). The transport/storage requirements act as the obstacles in this example. Note that similar AI techniques can be used for demand forecasting in your supply chain.
Figure 5: Route optimization using machine learning techniques can be applied
both for optimizing planning inside manufacturing facilities and for supply
chain purposes such as demand forecasting
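As a deliberately simple baseline for the single-driver case in Figure 5, the sketch below builds a route with a greedy nearest-neighbour heuristic; the stop coordinates are invented, and a production system would use a proper vehicle-routing solver or a learned policy instead.

```python
# Sketch: a greedy nearest-neighbour route as a single-driver baseline.
# Stop coordinates are made up; real planners handle obstacles/constraints.
import math

stops = {"depot": (0, 0), "A": (2, 3), "B": (5, 1), "C": (1, 6)}  # hypothetical

def greedy_route(stops, start="depot"):
    remaining = set(stops) - {start}
    route, current = [start], start
    while remaining:
        # Always drive to the closest unvisited stop next.
        nxt = min(remaining, key=lambda s: math.dist(stops[current], stops[s]))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route

print(greedy_route(stops))  # ['depot', 'A', 'C', 'B']
```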
More information on how to apply AI in drug manufacturing can be found in this
video.
### On a final note...
To enhance time to market and speed up the development of AI solutions, a comparison can be made with the semiconductor industry, where "ASML boosts engineering speed, strengthens security, accelerates time to market, and enhances competitive advantage by adding Google Cloud to its on-premises machine learning solutions". Examples like these show how the pharmaceutical industry can catch up with the semiconductor industry and move to Industry 4.0, which can result in great technological achievements, increased profit and reduced time to market.
Life Sciences & Healthcare
April 26, 2021
# Leveraging Artificial Intelligence for insight-driven commercial models in
life sciences
Contributors
Sven Rymenans
Sales Consultants
### Introduction
In an Accenture survey (1), more than 90% of life sciences executives
recognized artificial intelligence as important in driving innovation and
achieving outcomes such as hyper-personalized experiences, new sources of
growth, and new levels of efficiency. As their market pivots to the patient
with more individualized treatments and value-based models, life sciences
companies must also shift their commercial models. | scraping/output/5022103063578893462.txt | [
Traditionally, the lion's share of pharma promotional strategy and investment has been focused on the interactions between the HCP (HealthCare Practitioner) and the sales representative. Other promotional channels are meetings and events, service team calls, inside sales, digital, educational activities, etc. For sales organisations, it is hard to measure the impact of each of these channels and how they influence each other. Defining the most effective channel mix for a specific HCP is not easy either.
Through segmentation and targeting, commercial teams aim to tailor their efforts to groups of similar HCPs, but this is often based on limited information, leading to inaccurate and too broadly defined HCP segments. No surprise, then, that one out of two life sciences commercial leads say they don't have a good understanding of what their customers need and want.
The AI revolution on the commercial side of the pharma business has been slower on the uptake than on the R&D side, but we see great opportunities to improve the commercial model through AI in many ways, two of which we'll detail in this blogpost.
### Segmentation & Targeting
Commercial organisations in life sciences often use HCP information, like the number of patients treated for a specific disease or the percentage of adoption of their product, as a way to segment HCPs. A classical segmentation could be Gold-Silver-Bronze, with Gold referring to HCPs that treat more than 20 patients with a specific disease per week, Silver meaning 10 to 20 patients with that disease per week, and Bronze meaning 0 to 10 patients. As a consequence, sales representatives are incentivized to visit Gold HCPs more frequently than Silver HCPs, which they in turn visit more frequently than Bronze HCPs. This example shows that only a limited amount of information is used to segment the HCP population, and that a one-size-fits-all approach (at least per segment) is used. With AI, we can radically change this approach and create a segment-of-one for each HCP. Curious to know how we do that?
Well, to address this we use what is called an embedding space. For the non-techies reading this post, please bear with me for just a few seconds. An embedding is a relatively low-dimensional space into which you can translate high-dimensional vectors. Embeddings make it easier to do machine learning on large inputs like sparse vectors representing words. So far for the definition, but how can this help me improve my segmentation and targeting, you might ask? In essence, this technique allows the use of all available data about HCPs, in any format (full text, database, time series, image, spoken text, etc.). Think of conversations, interactions with your HCP portal, click behaviour on your website, specialty, potential, adoption, interviews, medical facility, age, hobbies, geography, etc. For each of these 'dimensions', the embedding space maps the values (e.g. the age number) onto an axis to distinguish between HCPs. The illustration below gives two examples of what this looks like for a combination of two dimensions (e.g. gender and royalty in the left example).
The illustrations show this in a three-dimensional space, as that is the maximum number of dimensions that can be visually illustrated, but in practice the number of dimensions used in the embedding space can be arbitrarily large. Let's stick with the 3D visualisation for simplicity, though. By doing this exercise with all the data at hand, every HCP is given specific coordinates in the embedding space. The closer two HCPs are located in this embedding space, the more similar they are.
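A minimal sketch of the idea: each HCP's (already numeric) features are mapped into a shared embedding space and similarity is measured there. The feature values are invented, and the random projection merely stands in for an encoder that would in practice be learned from the data sources listed above.

```python
# Sketch: placing HCPs in a shared embedding space and ranking by similarity.
# Features and the random projection are illustrative stand-ins only.
import numpy as np

rng = np.random.default_rng(0)
hcp_features = {                       # hypothetical, already-numeric features
    "gp_abc":           np.array([34.0, 1.0, 0.9, 120.0]),
    "pneumologist_xyz": np.array([33.0, 1.0, 0.8, 115.0]),
    "surgeon_qrs":      np.array([61.0, 0.0, 0.1, 20.0]),
}
projection = rng.normal(size=(4, 2))   # stand-in for a learned encoder

def embed(x):
    return x @ projection

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank all HCPs by similarity to one query HCP.
query = embed(hcp_features["gp_abc"])
for name, feats in hcp_features.items():
    print(name, round(cosine(query, embed(feats)), 3))
```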
So, for example, a general practitioner in location ABC might appear to be very similar to a pneumologist in location XYZ because they attended the same university, are young and therefore prefer digital channels, practice the same hobbies, and both frequently attend conferences. These two HCPs should be targeted in the same way based on the information coming from the embedding space, whereas traditional methods would target them differently based on limited information (e.g. only potential).
We at ML6 applied and benchmarked this technique of hyper-personalisation at a multinational company and outperformed the other techniques by 150%.
### Commercial Execution
The responsiveness of sales to promotional activities can be analysed through the smart use of data. Measuring brand sensitivity to promotion prior to investment decisions, or when considering the implementation of new channels, is a crucial step to maximize return on investment. Typical questions that arise are: "Which channels contribute significantly to brand sales? What are my incremental sales per extra unit of investment? What is the optimal point of investment? What are the major sales drivers? How do sales drivers vary across regions or across promotional channels? What is the level of carry-over for a brand (base)? What is the optimal activity mix?" etc.
To answer these questions we make use of the unobserved components model (UCM) for time series. This model was first introduced to the econometrics and statistics fields by A.C. Harvey (1989). UCM can be considered a multiple regression model with time-varying coefficients. It is based on the principle that it is useful to view a time series as decomposable into trend, seasonal and cycle components.
Advanced modeling techniques (state space modeling) are used to isolate, quantify and optimize the short-term impact of promotional activities on sales.
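A model of this family can be fitted with, for example, statsmodels' UnobservedComponents; the sketch below assumes weekly brand sales with promotional spend per channel as regressors, and the file and column names are hypothetical.

```python
# Sketch: a UCM-style decomposition with statsmodels on hypothetical data.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("brand_sales.csv", parse_dates=["week"], index_col="week")

model = sm.tsa.UnobservedComponents(
    endog=df["sales"],
    level="local linear trend",               # the slowly moving 'Base'
    seasonal=52,                              # yearly seasonality, weekly data
    exog=df[["calls", "digital", "events"]],  # promotional channels
)
result = model.fit(disp=False)
# Channel coefficients estimate short-term promotional impact; note that they
# are constant here, so truly time-varying coefficients would need a custom
# state space model.
print(result.summary())
```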
#### Input data:
#### Model decomposition:
Our model takes into consideration the carry-over effect (e.g. my weight this year is impacted by my weight at the beginning of the year (the starting point) and how it evolved in the years before), seasonal trends (e.g. ice cream sells better in summer than in winter) and other known parameters (short and long term). It also accounts for the fact that part of the generated sales can be attributed to patients who initiate treatment at a hospital and continue using the prescribed drug after discharge, i.e. the hospital spill-over effect. This is all represented in the 'Base' (the grey area in the model chart). This technique makes it possible to account for parameters that have influenced sales but that we are not aware of or cannot accurately track, e.g. competitor incentives. Therefore, sales impact is not misattributed to other channels such as traditional calls.
Next to that, the model also accounts for what is called memory (the ad-stock effect). This refers to the impact that marketing activities have on sales or brand health over time: it captures how the response to advertising builds and decays in consumer markets. This concept agrees with the common-sense notion that the awareness level after a new exposure will be higher if there have been exposures in the fairly recent past, and lower if there have not been. Finally, the delay in impact of an activity and diminishing returns over time are accounted for as well.
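The ad-stock effect is commonly encoded by applying a geometric decay transform to the spend series before it enters the model; a minimal sketch follows, with an illustrative decay rate.

```python
# Sketch: a geometric ad-stock transform encoding the 'memory' effect above.
# The decay rate is illustrative; in practice it is estimated from data.
import numpy as np

def adstock(spend, decay=0.6):
    """Carry a fraction `decay` of yesterday's stock into today."""
    stocked = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, s in enumerate(spend):
        carry = s + decay * carry
        stocked[t] = carry
    return stocked

print(adstock(np.array([100, 0, 0, 50, 0])))  # [100., 60., 36., 71.6, 42.96]
```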
The model shows the impact of each promotional channel on sales, but can also
be used to analyse responsiveness to additional investment in a promotional
channel, which we will detail in a following blogpost.
### Conclusion | scraping/output/5022103063578893462.txt | [
The fact that science and technology are converging to enable more personalized, precise treatments for patients should also trigger sales and marketing professionals to apply similar techniques for more precise targeting and more effective commercial efforts. The current state of modelling techniques and hyper-personalisation makes it possible to radically improve sales and marketing operations for life sciences companies.
More information on how to apply AI in drug sales & marketing can be found in this video.
March 29, 2023
# ML6 makes the Financial Times 1000 list of fastest-growing European companies
August 31, 2021
# Vertex Pipelines — Vertex AI vs AI Platform
Contributors: Liam Campbell, Data Engineer
Recently, Google unveiled their latest offering in ML Tools on their Google
Cloud Platform, Vertex AI. In brief, the new platform seeks to combine the
tools offered previously by separate services on GCP, such as AI Platform and
AutoML, into a single service. Integrating these previously separate services
brings benefits to users by giving them the ability to interact with all these
tools using a single set of APIs, SDKs, and Clients.
For a more detailed overview of the aims and features of Vertex AI as a whole,
check out this previous ML6 Blog Post. In this blog post, we will focus
primarily on Vertex AI’s answer to AI Platform Pipelines, Vertex Pipelines. ML
Pipeline technologies such as Vertex Pipelines are important to any MLOps Team
as tools for orchestrating the many moving parts of complex training and
prediction jobs at scale. They are a key piece of the infrastructure that
brings Machine Learning capabilities into a production setting.
### AI Platform Pipelines
Previously in AI Platform, Google’s former Machine Learning platform, we had
AI Platform Pipelines. This was a service aimed at making it easy to deploy
Kubeflow Pipelines, the MLOps Pipeline toolkit from Kubeflow, to Google Cloud
Platform resources. The workflow for deploying a Kubeflow Pipeline with AI Platform looked something like the following:
Steps to deploying Kubeflow on AI Platform
#### 1\. Set Up Kubernetes Cluster with Google Kubernetes Engine
The first step in deploying Pipelines to AI Platform was setting up a cluster on which to host our Kubeflow Pipelines Client. Although here at ML6 we would always caution our clients to automate their infrastructure with tools like Terraform, provisioning a cluster with GKE was made very easy via the neat user interfaces in the GKE Console.
#### 2\. Deploy Kubeflow Client to GKE Cluster
With our cluster up and running, we could easily deploy Kubeflow Pipelines
instances to it using the AI Platform Pipelines UI in the GCP console.
Creating a new deployment was as simple as selecting your GKE Cluster from a
drop down list and filling out a few pieces of configuration in a simple form.
The default behavior was to use nodes in the Kubernetes cluster to host MySQL and MinIO services, Kubeflow's defaults for metadata and artifact storage, but by providing connection details during setup, Cloud SQL and Cloud Storage can be used as more scalable and reliable alternatives.
#### 3\. Develop Pipelines with Notebooks
With the cluster set up and the Kubeflow instance created, we could use the Notebooks of AI Platform as secure development environments for working with the Kubeflow Pipelines SDK to develop our pipelines. In AI Platform we are simply using vanilla Kubeflow Pipelines tools on GCP resources, so all of the standard Kubeflow SDK features would work exactly as if you had spun up a Kubeflow Pipelines instance on an on-premises Kubernetes cluster.
#### 4\. Manage Pipelines and Runs in Kubeflow Client UI
The developed pipelines could then be uploaded to the Kubeflow client, where you could see all previously uploaded pipelines, launch runs of these pipelines, and view the DAG and outputs of ongoing and completed pipeline runs. Pipelines could also be uploaded, and runs started and monitored, via the Kubeflow Client API using functions defined in the Python SDK.
This workflow made it very easy to work with Kubeflow Pipelines on Google resources, with deployment taking 5 minutes (if you don't include the time it takes for GCP to spin up the resources in the background). Thanks to GKE, Kubernetes cluster management was as easy as it had ever been, and thanks to AI Platform Pipelines, deploying Kubeflow instances to those clusters was even easier! Despite this, ML teams still needed Kubernetes skills in order to make informed decisions, properly configure their cluster, and generally make the best use of AI Platform Pipelines and the GKE cluster it would be deployed to.
### Vertex AI & Vertex Pipelines
One of the first things one might notice moving from AI Platform Pipelines to Vertex Pipelines is that this abstraction of resource management away from the user has continued, bringing with it the usual reduction in the day-to-day hassle of managing configuration files.
A big indicator of this is that users are no longer required to create a dedicated Kubernetes cluster via GKE on which to run their Pipelines. Instead, Vertex AI employs an apparently serverless approach to running Pipelines written with the Kubeflow Pipelines DSL: the Kubernetes clusters and the pods running on them are managed behind the scenes by Vertex AI.
In the screenshot below, which shows the Vertex Pipelines UI, you start to get a sense of this approach. Instead of a store of pipelines and historic runs, as you may be familiar with if you've used the Kubeflow Pipelines UI before, we simply have a list of historic runs. Runs can be started by uploading a Job Spec compiled from a pipeline script, either via the UI or the Python Client. Here we start to get a feel for the 'pipelines-as-a-service' approach that Vertex Pipelines seems to be aiming for.
Vertex Pipelines UI
This also hints at another key conceptual difference between the two tools: Vertex AI isn't running an instance of a Kubeflow Client. Instead, Vertex Pipelines is its own version of the kind of infrastructure usually provided by Kubeflow Pipelines (i.e., container workflow orchestration) that can run pipelines specified using the Kubeflow SDK.
A key benefit of this new approach is that Vertex Pipelines makes great use of
GCS for Artifact storage natively, and even employs its own metadata server in
the form of Vertex AI Metadata. Having these managed services in place by
default is definitely welcome, as in our experience, the default options in AI
Platform Pipelines (Kubernetes nodes and PVCs hosting MySQL and MinIO
services) don’t scale quite as well as their Google managed counterparts. | scraping/output/-2930323519026450185.txt | [
Another benefit users will welcome is the reduction in cost that the pay-as-you-go model of this 'pipelines-as-a-service' approach is able to deliver. Instead of paying for the continuous uptime of the necessary K8s cluster, users now only pay $0.03 USD per run, plus whatever compute resources the pipeline consumes while it is running.
#### Kubeflow in Vertex Pipelines
Given this new approach to implementing Kubeflow Pipelines in Vertex AI, there
are some differences to note when developing workflows with the KFP SDK.
The first is that Vertex AI requires an entirely new version of the Kubeflow
SDK, version 2.0. This SDK comes bundled with versions of Kubeflow Pipelines
after v1.6, so with this version installed you are ready to start building SDK
v2.0 compliant pipelines.
This new version of the SDK is designed primarily to make use of the Pipeline
Metadata and Artifact tracking tools of ML Metadata, an open source Metadata
tracking tool developed by the Tensorflow Extended team. Vertex AI implements
its own version of this in Vertex ML Metadata, which makes use of the base TFX
ML Metadata tool.
Whilst developing with the new version of the SDK will feel largely the same as with the traditional Kubeflow SDK, there are a few differences that one will need to keep in mind when working with the new standard.
First, concerning building components, KFP SDK v2.0 mandates that all component parameters be annotated with their data type. In addition, an extra distinction is now made between component inputs that are parameters and those that are artifacts. Component parameters are those that can be passed as string, integer, float, Boolean, dictionary or list types, and are therefore usually smaller pieces of data. Artifacts are larger pieces of data, for example datasets or models, and are instead passed as a path referencing the location of the artifact. Parameter values and artifact metadata can be viewed in ML Metadata.
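To make this distinction concrete, here is a minimal sketch of a lightweight component in the v2 Python SDK; the component name and logic are purely illustrative:

```python
from kfp.v2.dsl import Dataset, Input, Output, component

@component(base_image="python:3.9")
def split_dataset(
    dataset: Input[Dataset],      # artifact input: handed to the step as a file path
    train_set: Output[Dataset],   # artifact output: written to a runner-generated path
    test_fraction: float = 0.2,   # parameter input: a small, typed value
):
    # Toy body: keep the first (1 - test_fraction) of the lines as training data.
    with open(dataset.path) as src, open(train_set.path, "w") as dst:
        lines = src.readlines()
        dst.writelines(lines[: int(len(lines) * (1 - test_fraction))])
```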
The difference between Artifacts and Parameters is really specified within the
component specification in the component.yaml files of our components. Below
we can see a basic component.yaml file as it may have looked with the old
version of the SDK. Below that, we have the component.yaml as it would look
under the new specification.
Old style component specification component.yaml file
New style component specification component.yaml file
Inspecting these component specifications carefully, one will notice that for
input values in the ‘command’ portion of the ‘implementation’, we previously
would have used `{inputValue: variable_name}` for Artifacts and Parameters. In
the new version, we specify Artifacts with `{inputPath: variable_name}` and
Parameters with `{inputValue: variable_name}`.
0.0363851822912693,
-0.03768565505743027,
-0.029211653396487236,
-0.040617723017930984,
0.04779202491044998,
-0.06580308824777603,
0.021908868104219437,
0.05751172825694084,
0.030146704986691475,
-0.0032851449213922024,
0.03687590733170509,
0.03212128207087517,
0.06169933080673218,
-0.0372... |
When building Pipelines, the new SDK version brings a couple of changes. The
first is that, as with components, pipeline parameter definitions must be
annotated with their data types. Second, pipelines must be decorated with the
`@kfp.dsl.pipeline` decorator. Within the Pipeline decorator we can specify
the pipeline name (the ID used for querying ML Metadata for information about
your run), description (which is optional), and pipeline_root, which specifies
the location in which to store pipeline outputs. The ‘pipeline_root’ parameter
is optional in Kubeflow Pipelines as it will use MinIO Artifact Storage if a
root is not defined. However, given that Vertex Pipelines will use GCS for
Artifact storage, it requires that ‘pipeline_root’ be specified (either within
the Pipeline decorator, or when calling the create_run_from_job_spec method of
the Python Client).
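As a hedged sketch of what this looks like in code (the pipeline and bucket names below are made up for illustration):

```python
from kfp.v2 import compiler, dsl

@dsl.component(base_image="python:3.9")
def say_hello(name: str) -> str:  # v2 mandates typed component parameters
    return f"Hello, {name}!"

@dsl.pipeline(
    name="demo-pipeline",                          # the ID used when querying ML Metadata
    description="A minimal demo pipeline",         # optional
    pipeline_root="gs://my-bucket/pipeline-root",  # hypothetical bucket; required on Vertex
)
def demo_pipeline(name: str = "world"):  # pipeline parameters need type annotations too
    say_hello(name=name)

# Compile the pipeline into the Job Spec that Vertex Pipelines consumes.
compiler.Compiler().compile(pipeline_func=demo_pipeline, package_path="demo_pipeline.json")
```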
#### Kubeflow SDK v2.0 Limitations
In addition to these SDK v2.0 considerations that users must keep in mind when
developing Kubeflow Pipelines for Vertex Pipelines, there are some additional
constraints given the practicalities of Vertex Pipelines’ implementation.
The first is the caching of pipeline component executions. In Kubeflow Pipelines, we could specify that the cache of a component execution expires after a given amount of time; until then, components running with identical configurations would reuse the cached output of previous executions. In Vertex Pipelines, we can't specify the time frame after which caches expire, but we can use the 'enable_caching' parameter of the client's create_run_from_job_spec method to enable or disable the use of caches in Vertex Pipeline executions.
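A short sketch of what that could look like with the Python client that ships with the v2 SDK; the project, region and bucket values are placeholders:

```python
from kfp.v2.google.client import AIPlatformClient

# Hypothetical project and region values.
client = AIPlatformClient(project_id="my-project", region="europe-west1")
client.create_run_from_job_spec(
    "demo_pipeline.json",
    pipeline_root="gs://my-bucket/pipeline-root",
    enable_caching=False,  # disable reuse of cached component executions for this run
)
```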
In addition to caching, recursively called components are another feature of Kubeflow Pipelines that Vertex Pipelines does not currently support. The Google documentation does use the language 'Currently, Vertex Pipelines does not support…', which would indicate that this is something they are potentially looking to support in the future.
Another key difference between Kubeflow Pipelines and Vertex Pipelines is the
push to use more Google Managed resources such as GCS within your pipelines.
For example, in Vertex Pipelines, users can access GCS directly as though it
were a mounted volume of storage using Cloud Storage FUSE. By contrast,
previously in Kubeflow Pipelines, users interacted with Kubernetes resources
such as Persistent Volume Claims (PVCs). Another indicator of this is the host of Google Cloud-specific predefined components that have been released to support interaction between pipelines and Google Cloud / Vertex AI resources.
### Conclusion
In summary, Vertex AI Pipelines introduces some nice changes over the previous
AI Platform Pipelines implementation that will overall make the experience of
developing and running MLOps workflows on GCP a lot easier. The move to make
the underlying resources more managed than in the previous solution is a
welcome one, simultaneously speeding up and simplifying the process of getting
up and running with Pipelines in GCP. It is worth noting that this product is still in a kind of preview phase; however, the key tools are already there, and it is certainly a promising improvement on what came before. For those still unsure whether to stick with AI Platform or jump straight in with Vertex Pipelines, I would recommend you give the new kid on the block a chance.
October 5, 2022
# Hybrid Machine Learning: Marrying NLP and RegEx
Contributors: Matthias Feys, Q / CTO
### Introduction
When designing real-world NLP applications, you are often confronted with
limited (labeled) data, latency requirements, cost restrictions, etc. that
hinder unlocking the full potential of your solution.
A hybrid setup where you leverage domain knowledge to improve accuracy,
efficiency, reliability and/or interpretability would be perfect.
But figuring out how to design such a hybrid solution is far from evident.
Let’s browse through some common hybrid NLP design patterns and look at
example situations of when you should opt for which pattern.
#### (1) RULES VS ML
Pure ML-based design pattern
Pure rule-based design pattern
The first pattern we’ll consider is the adversarial case: you either choose a
pure rule-based or a pure ML-based solution.
Let’s consider some examples where these design patterns make a lot of sense:
Named Entity Recognition (NER): the choice for or against an ML-based approach
essentially boils down to how contextual the entities are.
For example, dates can be structured in a specific way (e.g., "DD/MM/YYYY"). If an entity follows this format, it is a date; otherwise it isn't. Thus, it is a very "non-contextual entity": a concrete fixed pattern determines whether or not it is a date, independent of the context.
It is straightforward to extract these kinds of entities purely via simple
rules.
A simple RegEx rule can easily recognize both dates
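As a minimal sketch of such a rule (assuming the DD/MM/YYYY format above; the pattern is illustrative, not a full calendar validator):

```python
import re

# Matches DD/MM/YYYY-style dates anywhere in a text.
DATE_RE = re.compile(r"\b\d{2}/\d{2}/\d{4}\b")

text = "Jan was born on 03/07/1991 and signed the contract on 15/06/2021."
print(DATE_RE.findall(text))  # ['03/07/1991', '15/06/2021'] -> both dates match
```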
However, say you only want to extract dates of birth and not other kinds of
dates. Now, we are dealing with a very “contextual entity”: dates of birth and
other kinds of dates look exactly the same; without any context, you wouldn’t
be able to distinguish between the two.
It is very difficult to extract these entities in a rule-based way, so a pure ML-based approach is the most appropriate.
A contextual language model predicts that only the first date is a date of
birth
Text classification: in text classification use cases, the underlying features
that determine which class a text belongs to are often very latent. As rule-
based systems don’t tend to perform well in these scenarios, a pure ML-based
design pattern is usually the way to go. The same goes for complex tasks such
as keyword extraction, text summarization, etc.
Some tasks are just too complex for rule-based approaches to have a meaningful
impact
#### (2) RULES AFTER/BEFORE ML
Rule-based pre-processing design pattern
Rule-based post-processing design pattern
The next pattern we’ll look into has a sequential nature: the business rules
either act as a first filter or as a post-processing step for the ML model.
Let’s take another look at some examples:
High-pass filter: say you want to extract dates of birth and no other dates,
so you might opt for a pure ML approach (see above). However, only a fraction
of your data actually contains dates, so running inference on every single
instance seems like a bit of a waste.
We know that every date of birth is also a date and that dates follow a fixed
pattern. Thus, we can first check whether a text contains a date via a simple
business rule and then only run inference in the cases that it does.
With one simple rule, we only do inference on 2 passages instead of 7 with no
impact on performance
With a few simple rules, you can often drastically reduce the amount of
processing power you use with a minimal to non-existent impact on performance.
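A minimal sketch of such a high-pass filter, assuming a hypothetical `ner_model` object with a `predict` method:

```python
import re

DATE_RE = re.compile(r"\b\d{2}/\d{2}/\d{4}\b")

def extract_birth_dates(passages, ner_model):
    results = []
    for passage in passages:
        if DATE_RE.search(passage):
            # Only pay for ML inference when the cheap rule fires.
            results.append(ner_model.predict(passage))
        else:
            # No date-like pattern means there is trivially no date of birth.
            results.append([])
    return results
```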
(Semantic) search: in a very similar fashion to what’s outlined above, you can
reduce the amount of data you process to perform a semantic search by first
filtering out those results for which you are (almost) certain that they are
not going to be relevant (e.g., have a (near) zero TF-IDF score). This kind of
setup is referred to as a “retrieve and re-rank” architecture.
Depending on the data, a double-digit percentage decrease in latency is often
attainable with a negligible impact on search performance.
Entity linking: let’s say we want to extract product names along with sales
prices and link the two entities together (i.e., figure out which sales price
belongs to which product name). We know our data and we make the simple
assumption that a sales price belongs to the closest product name.
This is a rule-based post-processing (“linking”) step that happens after the
ML-based extraction of sales prices and product names.
A simple rule will link the ML-extracted entities together
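A sketch of that linking rule, assuming (purely for illustration) that the NER step returns `(label, text, character_offset)` tuples:

```python
def link_prices_to_products(entities):
    products = [e for e in entities if e[0] == "PRODUCT"]
    prices = [e for e in entities if e[0] == "PRICE"]
    # Each price is linked to the product name closest to it in the text.
    return [
        (min(products, key=lambda p: abs(p[2] - pos))[1], text)
        for _, text, pos in prices
    ]

entities = [
    ("PRODUCT", "espresso machine", 0),
    ("PRICE", "€249", 25),
    ("PRODUCT", "milk frother", 60),
    ("PRICE", "€39", 80),
]
print(link_prices_to_products(entities))
# [('espresso machine', '€249'), ('milk frother', '€39')]
```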
#### (3) RULES AND ML
Ensemble design pattern
This pattern looks to combine the outputs of rules and ML as an ensemble.
Again, let’s see some examples:
More determinism: not all mistakes are equal. Perhaps there are some patterns
that you know to be correct and want your solution to get correct every single
time.
In this scenario, you can have a restrictive rule-based system that ensures
that these critical situations are covered and in parallel a more
generalizable ML-based system that aims to capture the other (complex) cases.
For example, you can have a curated gazetteer of names that you know to be
clean and correct. These names will always be recognized. The (uncommon) names
that fall outside this list will be captured by the ML model.
A gazetteer can capture common names and an NER model picks up the more niche
ones
Optimization for recall/precision: since you are essentially combining
multiple predictions, you can optimize for recall or precision by choice of
the “voting scheme” (i.e., how you go from multiple individual predictions to
one final prediction).
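A minimal sketch of the ensemble pattern, with a hypothetical curated gazetteer and a hypothetical `ner_model.predict` that returns name strings:

```python
KNOWN_NAMES = {"Jan Jansen", "An Peeters"}  # hypothetical curated gazetteer

def extract_names(text, ner_model):
    rule_hits = {name for name in KNOWN_NAMES if name in text}
    ml_hits = set(ner_model.predict(text))
    # Taking the union of both systems optimizes for recall;
    # requiring agreement (intersection) would optimize for precision.
    return rule_hits | ml_hits
```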
#### (4) ML-INFORMED RULES
ML-informed rules design pattern
A more niche situation could be that your use case really requires a rule-
based system — be it for regulatory reasons (e.g., GDPR’s “Right to
explanation”) or for other reasons — but that these rules are very difficult
to determine.
In this scenario, you could use machine learning to generate optimal (RegEx)
rules.
There are actually multiple ways to achieve this — ranging from natural
language-to-RegEx Seq2Seq models like SemRegex to models that are trained on
labeled data like the evolutionary RegexGenerator algorithm and models like
TransRegex that use both natural language and labeled examples.
#### (5) RULE-INFORMED ML
Rule-informed ML design pattern
This pattern also looks to combine rules and ML but it does so by finding an
appropriate representation of RegEx results and truly integrating the domain
knowledge into the model architecture.
Theoretically, this is a very clean solution but in practice, we don’t see
(widespread) adoption of such architectures. Or at least not yet.
If you want to get some intuition as to what this would look like, check out
this paper. But at the time of writing, we wouldn’t recommend such a design
pattern.
### Conclusion
In conclusion, hybrid NLP has the potential to drastically improve the accuracy, efficiency, reliability and/or interpretability of your solution, especially in settings with little labeled data. That is, if you do it right.
Choosing the right setup is inherently very data- and problem-specific, but hopefully the examples above have given you some intuition into which approach to take.
May 18, 2022
# BERT is eating your cash: quantization and ONNXRuntime to save money
In 2020, we trained the first Dutch GPT2 model, in various sizes. Of course we wanted to share this with the world by open-sourcing the models and the code, along with a nice application that showcases their use. But this nice application comes at a cost, literally…
### As-is: HuggingFace model powering a Python app
Currently, a HF model is hosted inside a Python Flask app, which uses the
pipeline API from the HF library.
A routing microservice routes each user request to the correct model serving microservice, depending on whether the request addresses the 117M-parameter GPT2-small model or the 345M-parameter GPT2-medium model.
PS: if you’re curious how we trained this Dutch GPT2 model: we outlined it
perfectly (if we say so ourselves) in this blogpost. If you want to get freaky
with these Dutch models yourself, you can find them on our HF Hub page.
0.0360230877995491,
-0.02218773029744625,
-0.033502448350191116,
-0.0008625536574982107,
0.07639729976654053,
-0.03349388763308525,
0.07078646868467331,
0.05835326015949249,
0.05529771000146866,
-0.00585752772167325,
0.026381324976682663,
0.0415467843413353,
0.0460730604827404,
-0.06736791... |
The final user-facing application looks as follows:
Try it for yourself at https://gpt2.ml6.eu/nl
The current setup has some difficulties though:
First, the responses take some time to generate, especially with the medium-size model, degrading the user experience.
Second, the container is quite big because of the large models, so we either
have to:
* autoscale it to zero to keep the cost down, but then have a large startup time from a cold start
* let it run continuously, burning cash
So, in this blogpost we’re going to improve this model serving component by
quantizing it to make it run smoother, hopefully without losing too much
expressive quality.
### Quantization to reduce the footprint
We’re not going to go into detail on what quantization is. If you wanna get a
great primer on this: we wrote a blogpost on this and other model efficiency
aspects here.
TL;DR: by reducing the precision of the weights in the Linear and Embedding layers from fp32 to int8 through a mapping action, the memory footprint of a model is greatly reduced!
source: https://rasa.com/blog/compressing-bert-for-faster-prediction-2/
Quantization is quite an active field, so a number of libraries offer options
to quantize your model:
* PyTorch, though only for quantizing the Linear layers
* HuggingFace Optimum: an up-and-coming solution leveraging Intel Neural Compressor and ONNX Runtime
* ONNX Runtime (ORT) itself
Even though we’re huge fans of where Optimum is heading, in this post, we used
the last solution, because of the great support for GPT2 quantization through
examples and dedicated helpers.
If you’re just here for the code goodies, you can find all of the code for
this blogpost link ! | scraping/output/7518757031055141065.txt | [
Quantization using ORT only involves three simple steps:
#### 1\. Convert the PyTorch model to an ONNX model
All the upcoming transformations happen through the ONNXRuntime (ORT) library,
so it’s only logical that these steps will require an ONNX binary. This can
easily be done using HF + ORT:
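The code snippet embedded in the original post is not reproduced here; as a hedged stand-in, this is one way to export a GPT2 checkpoint to ONNX with plain `torch.onnx.export` (the model id is hypothetical):

```python
import torch
from transformers import GPT2LMHeadModel

# Hypothetical model id; any GPT2 checkpoint from the HF Hub works the same way.
model = GPT2LMHeadModel.from_pretrained(
    "ml6team/gpt2-small-dutch", use_cache=False, return_dict=False
)
model.eval()

dummy_input = torch.randint(0, model.config.vocab_size, (1, 8))  # (batch, sequence)
torch.onnx.export(
    model,
    (dummy_input,),
    "gpt2.onnx",
    input_names=["input_ids"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "logits": {0: "batch", 1: "sequence"},
    },
    opset_version=13,
)
```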
#### 2\. Optimize the model
Model optimization involves a few operations to make the model graph more
streamlined. One such example is fusing sequential operations into a single
step.
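A sketch using ORT's transformer optimizer; the head count and hidden size shown are those of GPT2-small:

```python
from onnxruntime.transformers import optimizer

# Fuses and streamlines graph operations for the GPT2 architecture.
opt_model = optimizer.optimize_model(
    "gpt2.onnx", model_type="gpt2", num_heads=12, hidden_size=768
)
opt_model.save_model_to_file("gpt2-opt.onnx")
```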
#### 3\. Quantize the model
This is where the actual quantization happens, or in other words: the mapping of the FP32 weight values to the INT8 value range.
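With ORT, dynamic quantization is essentially a one-liner (a sketch; the file names carry over from the previous steps):

```python
from onnxruntime.quantization import QuantType, quantize_dynamic

# Maps the fp32 weights to int8; activations are quantized dynamically at runtime.
quantize_dynamic("gpt2-opt.onnx", "gpt2-quant.onnx", weight_type=QuantType.QInt8)
```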
#### Run it using ORT
To actually use the model artifact (the ONNX binary file), we of course need a runtime to host it. And what better runtime for ONNX than ONNXRuntime?
To do this, you can easily create an ORT session, which can be fed with the typical inputs otherwise required by a HF model (token IDs, attention masks, etc.) to produce the output logits:
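A minimal sketch of such a session, doing a plain greedy pick of a single next token (the file name carries over from the quantization step, and the token ids are placeholders):

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("gpt2-quant.onnx", providers=["CPUExecutionProvider"])

# Token ids would normally come from the HF tokenizer; these are placeholders.
input_ids = np.array([[31373, 11, 995]], dtype=np.int64)
(logits,) = session.run(["logits"], {"input_ids": input_ids})
next_token_id = int(np.argmax(logits[0, -1]))  # greedy pick of the next token
```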
Easy-peasy, right? Well, there are a few aspects around ORT sessions to make it
work well:
* IO-binding to avoid data copy
* Post-processing the logits to enable top_k and top_p sampling, beam search, temperature, etc., instead of plain greedy decoding
* Including past inputs to improve the performance
* EOS special tag detection and processing
We won’t go into detail on all of the code needed for each of these aspects,
but you can find them all in the notebook (link again) where they are
implemented.
### Evaluation
So we coded up all these extra aspects to get nice predictions, and our model is running happily on a Cloud Run instance, inside a Python app that hosts the ORT session. Happy days!
But is it any good… ?
#### Generation quality
Of course, we want to make sure our models don’t produce garbage, so we will
look at the generation quality from a couple of angles:
The difference in output logits
A first quick check we can do is comparing the output logits of the language
modelling heads of the two models.
If the quantized model is indeed a credible stand-in for the normal model,
then the output logits should roughly follow the same value distribution
point-by-point.
So by measuring the average, median and max difference in logit values, we can get a first idea of the quality of the potential output:
We can see that the logit values can differ quite a bit. We can also see that
the impact is less for the 345M parameter GPT2-medium than for the 117M
GPT2-small model.
Though this is a first indication that we might lose some quality, it doesn't speak to the true expressive capabilities of the quantized models. So let's continue:
The perplexity
Lucky for us, a nice metric to measure the generation quality in a more
meaningful fashion exists: perplexity! The ever-lovely peeps at HuggingFace
wrote a very nice page about it, what it does, and how to code it up (you can
find our implementation in our notebook).
We followed their approach, and measured the perplexity on the first 1000
documents of the Dutch Partition of the OSCAR corpus. This is a wide
collection of various crawled Dutch webpages.
Interestingly, the perplexity increase is smaller for the medium GPT2 model than for the small GPT2 model, meaning the GPT2-medium model seems to suffer less degradation from the quantization process. This is in line with what we observed in the logit comparison!
The human evaluation
The kicker, the champ, the true test of generative quality!
Here are some example generations by the non-quantized and quantized model
side by side, where we ask each model to produce the next 20 tokens.
Both models generate through sampling, with top_p=0.95, top_k=50 and
temperature=0.95
Comparison in expressive quality
From the look of it, both seem to do very okay! Well enough for the online
demo, where only a few next tokens are predicted each time.
But is it any fast… ?
### Latency
Now that we know the quantized models are usable, we can start to measure the
first annoyance with the as-is deployment: the startup time and request
latency.
Here we want to measure two items:
the startup time when the service experiences a cold start
When a serverless Cloud Run instance that is scaled to 0 starts receiving requests, it needs to perform what is called a "cold start": deploying your container application to an available machine instance, fetching the models from Cloud Storage, and loading them in to start serving requests. This of course takes a bit of time.
Let’s compare this “warmup time” between a service serving the non-quantized
versions and the quantized versions:
the request latency
To measure the response timing for each deployed model, we send a barrage of a few hundred sequential requests to the deployed microservice, meaning this latency includes network latency, service overhead and model prediction time. We repeat this a number of times, each time with a string of a different sequence length, because self-attention computational complexity scales quadratically with the sequence length!
Again a solid performance from the quantized models! The latency seems to be
reduced by a factor of 3–4.
But is it any cheap… ?
### Cost
Since cloud storage is basically free, we mainly look at the costs of hosting
and running the model in a microservice on Google Cloud Run.
We can easily use the Cloud Run pricing documentation to get a price estimate:
* The quantized gpt2-small + gpt2-medium model image fits on a 2GB, 1vCPU machine, totaling 💲57.02
* The non-quantized gpt2-small + gpt2-medium model image needs an 8GB, 2vCPU machine (because you can't have a 1vCPU machine for that amount of memory), totaling 💲134.78
Meaning we can reduce our cloud bill for the serving part by a factor of 2.4!
And even if the cost of the reworked deployment were too large, we have clearly shown that the smaller quantized container has a much lower warm-up time, making autoscale-to-zero a valid option.
### So long!
Leveraging quantization and ORT clearly results in a nice speedup and cost reduction!
Enjoy all the money you just saved! And stay tuned for upcoming blogposts where we leverage Triton Inference Server for full transformer hosting enlightenment, since that is a better-suited approach for mature model serving deployments than the presented Flask option.